00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 262 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.015 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.016 The recommended git tool is: git 00:00:00.016 using credential 00000000-0000-0000-0000-000000000002 00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.031 Fetching changes from the remote Git repository 00:00:00.034 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.049 Using shallow fetch with depth 1 00:00:00.049 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.049 > git --version # timeout=10 00:00:00.084 > git --version # 'git version 2.39.2' 00:00:00.084 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.085 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.085 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.548 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.561 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.573 Checking out Revision 9b8cb13ca58b20128762541e7d6e360f21b83f5a (FETCH_HEAD) 00:00:02.573 > git config core.sparsecheckout # timeout=10 00:00:02.584 > git read-tree -mu HEAD # timeout=10 00:00:02.602 > git checkout -f 9b8cb13ca58b20128762541e7d6e360f21b83f5a # timeout=5 00:00:02.621 Commit message: "inventory: repurpose WFP74 and WFP75 to dev systems" 00:00:02.621 > git rev-list --no-walk 9b8cb13ca58b20128762541e7d6e360f21b83f5a # timeout=10 00:00:02.711 [Pipeline] Start of Pipeline 00:00:02.725 [Pipeline] library 00:00:02.727 Loading library shm_lib@master 00:00:06.550 Library shm_lib@master is cached. Copying from home. 00:00:06.580 [Pipeline] node 00:00:06.635 Running on WFP10 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:06.638 [Pipeline] { 00:00:06.647 [Pipeline] catchError 00:00:06.648 [Pipeline] { 00:00:06.659 [Pipeline] wrap 00:00:06.667 [Pipeline] { 00:00:06.674 [Pipeline] stage 00:00:06.675 [Pipeline] { (Prologue) 00:00:06.877 [Pipeline] sh 00:00:07.171 + logger -p user.info -t JENKINS-CI 00:00:07.187 [Pipeline] echo 00:00:07.188 Node: WFP10 00:00:07.199 [Pipeline] sh 00:00:07.492 [Pipeline] setCustomBuildProperty 00:00:07.500 [Pipeline] echo 00:00:07.501 Cleanup processes 00:00:07.504 [Pipeline] sh 00:00:07.781 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:07.781 1538989 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:07.795 [Pipeline] sh 00:00:08.075 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.075 ++ grep -v 'sudo pgrep' 00:00:08.075 ++ awk '{print $1}' 00:00:08.075 + sudo kill -9 00:00:08.075 + true 00:00:08.089 [Pipeline] cleanWs 00:00:08.104 [WS-CLEANUP] Deleting project workspace... 00:00:08.105 [WS-CLEANUP] Deferred wipeout is used... 
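The "Cleanup processes" step traced above reduces to one small pipeline; a minimal standalone sketch of the same idiom (the workspace path is the one this job uses, everything else is as traced):

# Kill any SPDK processes left over from a previous run in this workspace.
ws=/var/jenkins/workspace/short-fuzz-phy-autotest
# pgrep -af prints "PID full-command-line" for every match; drop the pgrep
# invocation itself, keep only the PID column.
pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# As in the trace ('+ sudo kill -9' followed by '+ true'), the step must
# stay green when nothing matched, and kill -9 with an empty argument
# list exits non-zero, hence '|| true'.
sudo kill -9 $pids || true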
00:00:08.112 [WS-CLEANUP] done 00:00:08.136 [Pipeline] setCustomBuildProperty 00:00:08.148 [Pipeline] sh 00:00:08.424 + sudo git config --global --replace-all safe.directory '*' 00:00:08.497 [Pipeline] nodesByLabel 00:00:08.498 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.508 [Pipeline] httpRequest 00:00:08.511 HttpMethod: GET 00:00:08.512 URL: http://10.211.164.101/packages/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:08.515 Sending request to url: http://10.211.164.101/packages/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:08.517 Response Code: HTTP/1.1 200 OK 00:00:08.518 Success: Status code 200 is in the accepted range: 200,404 00:00:08.518 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:09.462 [Pipeline] sh 00:00:09.743 + tar --no-same-owner -xf jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:09.762 [Pipeline] httpRequest 00:00:09.767 HttpMethod: GET 00:00:09.768 URL: http://10.211.164.101/packages/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:09.768 Sending request to url: http://10.211.164.101/packages/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:09.775 Response Code: HTTP/1.1 200 OK 00:00:09.776 Success: Status code 200 is in the accepted range: 200,404 00:00:09.776 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:41.578 [Pipeline] sh 00:00:41.869 + tar --no-same-owner -xf spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:44.428 [Pipeline] sh 00:00:44.714 + git -C spdk log --oneline -n5 00:00:44.714 cf8ec7cfe version: 24.09-pre 00:00:44.714 2d6134546 lib/ftl: Handle trim requests without VSS 00:00:44.714 106ad3793 lib/ftl: Rename unmap to trim 00:00:44.714 5555d51c8 lib/ftl: Add means to create new layout regions 00:00:44.714 5d89ebb72 lib/ftl: Add deinit handler to FTL mngt 00:00:44.729 [Pipeline] sh 00:00:45.014 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/50/23150/8 00:00:45.584 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:45.584 * branch refs/changes/50/23150/8 -> FETCH_HEAD 00:00:45.596 [Pipeline] sh 00:00:45.880 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:46.140 Previous HEAD position was 08f3a46de7 pmdinfogen: avoid empty string in ELFSymbol() 00:00:46.140 HEAD is now at 023fd6c428 malloc: fix allocation for a specific case with ASan 00:00:46.151 [Pipeline] } 00:00:46.169 [Pipeline] // stage 00:00:46.177 [Pipeline] stage 00:00:46.180 [Pipeline] { (Prepare) 00:00:46.200 [Pipeline] writeFile 00:00:46.216 [Pipeline] sh 00:00:46.501 + logger -p user.info -t JENKINS-CI 00:00:46.515 [Pipeline] sh 00:00:46.800 + logger -p user.info -t JENKINS-CI 00:00:46.812 [Pipeline] sh 00:00:47.095 + cat autorun-spdk.conf 00:00:47.095 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.095 SPDK_TEST_FUZZER_SHORT=1 00:00:47.095 SPDK_TEST_FUZZER=1 00:00:47.095 SPDK_RUN_UBSAN=1 00:00:47.103 RUN_NIGHTLY= 00:00:47.107 [Pipeline] readFile 00:00:47.131 [Pipeline] withEnv 00:00:47.133 [Pipeline] { 00:00:47.146 [Pipeline] sh 00:00:47.431 + set -ex 00:00:47.431 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:47.431 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:47.431 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.431 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:47.431 ++ SPDK_TEST_FUZZER=1 00:00:47.431 ++ SPDK_RUN_UBSAN=1 00:00:47.431 ++ RUN_NIGHTLY= 00:00:47.431 + case 
$SPDK_TEST_NVMF_NICS in 00:00:47.431 + DRIVERS= 00:00:47.431 + [[ -n '' ]] 00:00:47.431 + exit 0 00:00:47.441 [Pipeline] } 00:00:47.460 [Pipeline] // withEnv 00:00:47.465 [Pipeline] } 00:00:47.481 [Pipeline] // stage 00:00:47.491 [Pipeline] catchError 00:00:47.493 [Pipeline] { 00:00:47.508 [Pipeline] timeout 00:00:47.509 Timeout set to expire in 30 min 00:00:47.511 [Pipeline] { 00:00:47.526 [Pipeline] stage 00:00:47.528 [Pipeline] { (Tests) 00:00:47.544 [Pipeline] sh 00:00:47.830 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:47.830 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:47.830 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:47.830 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:47.830 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:47.830 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:47.830 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:47.830 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:47.830 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:47.830 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:47.830 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:47.830 + source /etc/os-release 00:00:47.830 ++ NAME='Fedora Linux' 00:00:47.830 ++ VERSION='38 (Cloud Edition)' 00:00:47.830 ++ ID=fedora 00:00:47.830 ++ VERSION_ID=38 00:00:47.830 ++ VERSION_CODENAME= 00:00:47.830 ++ PLATFORM_ID=platform:f38 00:00:47.830 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:47.830 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:47.830 ++ LOGO=fedora-logo-icon 00:00:47.830 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:47.830 ++ HOME_URL=https://fedoraproject.org/ 00:00:47.831 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:47.831 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:47.831 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:47.831 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:47.831 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:47.831 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:47.831 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:47.831 ++ SUPPORT_END=2024-05-14 00:00:47.831 ++ VARIANT='Cloud Edition' 00:00:47.831 ++ VARIANT_ID=cloud 00:00:47.831 + uname -a 00:00:47.831 Linux spdk-wfp-10 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:47.831 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:50.372 Hugepages 00:00:50.372 node hugesize free / total 00:00:50.372 node0 1048576kB 0 / 0 00:00:50.372 node0 2048kB 0 / 0 00:00:50.372 node1 1048576kB 0 / 0 00:00:50.372 node1 2048kB 0 / 0 00:00:50.372 00:00:50.372 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:50.372 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:50.372 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:50.372 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:50.372 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:50.372 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:50.372 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:50.372 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:50.372 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:50.372 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme1 nvme1n1 00:00:50.372 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:50.372 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:50.372 I/OAT 0000:80:04.1 
8086 2021 1 ioatdma - - 00:00:50.372 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:50.372 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:50.372 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:50.372 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:50.372 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:50.372 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:50.372 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme2 nvme2n1 00:00:50.372 + rm -f /tmp/spdk-ld-path 00:00:50.372 + source autorun-spdk.conf 00:00:50.372 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.372 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:50.372 ++ SPDK_TEST_FUZZER=1 00:00:50.372 ++ SPDK_RUN_UBSAN=1 00:00:50.372 ++ RUN_NIGHTLY= 00:00:50.372 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:50.372 + [[ -n '' ]] 00:00:50.372 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:50.372 + for M in /var/spdk/build-*-manifest.txt 00:00:50.372 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:50.372 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:50.372 + for M in /var/spdk/build-*-manifest.txt 00:00:50.372 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:50.372 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:50.372 ++ uname 00:00:50.372 + [[ Linux == \L\i\n\u\x ]] 00:00:50.372 + sudo dmesg -T 00:00:50.372 + sudo dmesg --clear 00:00:50.372 + dmesg_pid=1540017 00:00:50.372 + [[ Fedora Linux == FreeBSD ]] 00:00:50.372 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.372 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.372 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:50.372 + [[ -x /usr/src/fio-static/fio ]] 00:00:50.372 + export FIO_BIN=/usr/src/fio-static/fio 00:00:50.372 + FIO_BIN=/usr/src/fio-static/fio 00:00:50.372 + sudo dmesg -Tw 00:00:50.372 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:50.372 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:50.372 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:50.372 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.372 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.372 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:50.372 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.372 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.372 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:50.372 Test configuration: 00:00:50.372 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.372 SPDK_TEST_FUZZER_SHORT=1 00:00:50.372 SPDK_TEST_FUZZER=1 00:00:50.372 SPDK_RUN_UBSAN=1 00:00:50.372 RUN_NIGHTLY= 19:59:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:50.372 19:59:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:50.372 19:59:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:50.372 19:59:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:50.372 19:59:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.372 19:59:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.372 19:59:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.372 19:59:37 -- paths/export.sh@5 -- $ export PATH 00:00:50.372 19:59:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.372 19:59:37 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:50.372 19:59:37 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:50.372 19:59:37 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715882377.XXXXXX 00:00:50.372 19:59:37 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715882377.JdnbCX 00:00:50.372 19:59:37 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:50.372 19:59:37 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:50.372 19:59:37 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:50.372 19:59:37 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:50.372 19:59:37 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:50.372 19:59:37 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:50.372 19:59:37 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:50.372 19:59:37 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.373 19:59:37 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:50.373 19:59:37 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:50.373 19:59:37 -- pm/common@17 -- $ local monitor 00:00:50.373 19:59:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.373 19:59:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.373 19:59:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.373 19:59:37 -- pm/common@21 -- $ date +%s 00:00:50.373 19:59:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.373 19:59:37 -- pm/common@21 -- $ date +%s 00:00:50.373 19:59:37 -- pm/common@25 -- $ sleep 1 00:00:50.373 19:59:37 -- pm/common@21 -- $ date +%s 00:00:50.373 19:59:37 -- pm/common@21 -- $ date +%s 00:00:50.373 19:59:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715882377 00:00:50.373 19:59:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715882377 00:00:50.373 19:59:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715882377 00:00:50.373 19:59:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715882377 00:00:50.633 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715882377_collect-vmstat.pm.log 00:00:50.633 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715882377_collect-cpu-load.pm.log 00:00:50.633 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715882377_collect-cpu-temp.pm.log 00:00:50.633 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715882377_collect-bmc-pm.bmc.pm.log 00:00:51.573 19:59:38 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:51.573 19:59:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:51.573 19:59:38 -- spdk/autobuild.sh@12 -- $ 
umask 022 00:00:51.573 19:59:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:51.573 19:59:38 -- spdk/autobuild.sh@16 -- $ date -u 00:00:51.573 Thu May 16 05:59:38 PM UTC 2024 00:00:51.573 19:59:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:51.573 v24.09-pre 00:00:51.573 19:59:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:51.573 19:59:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:51.573 19:59:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:51.573 19:59:38 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:51.573 19:59:38 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:51.573 19:59:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.573 ************************************ 00:00:51.573 START TEST ubsan 00:00:51.573 ************************************ 00:00:51.573 19:59:38 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:51.573 using ubsan 00:00:51.573 00:00:51.573 real 0m0.000s 00:00:51.573 user 0m0.000s 00:00:51.573 sys 0m0.000s 00:00:51.573 19:59:38 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:51.573 19:59:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:51.573 ************************************ 00:00:51.573 END TEST ubsan 00:00:51.573 ************************************ 00:00:51.573 19:59:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:51.573 19:59:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:51.573 19:59:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:51.573 19:59:38 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:51.574 19:59:38 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:51.574 19:59:38 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:51.574 19:59:38 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:00:51.574 19:59:38 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:51.574 19:59:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.574 ************************************ 00:00:51.574 START TEST autobuild_llvm_precompile 00:00:51.574 ************************************ 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autotest_common.sh@1121 -- $ _llvm_precompile 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:00:51.574 Target: x86_64-redhat-linux-gnu 00:00:51.574 Thread model: posix 00:00:51.574 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ 
fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:00:51.574 19:59:38 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:51.834 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:51.834 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:52.404 Using 'verbs' RDMA provider 00:01:05.580 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:17.800 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:17.800 Creating mk/config.mk...done. 00:01:17.800 Creating mk/cc.flags.mk...done. 00:01:17.800 Type 'make' to build. 00:01:17.800 00:01:17.800 real 0m25.768s 00:01:17.800 user 0m12.377s 00:01:17.800 sys 0m12.417s 00:01:17.800 20:00:04 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:17.800 20:00:04 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:17.800 ************************************ 00:01:17.800 END TEST autobuild_llvm_precompile 00:01:17.800 ************************************ 00:01:17.800 20:00:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.800 20:00:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.800 20:00:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.800 20:00:04 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:17.800 20:00:04 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:17.800 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:17.800 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:18.059 Using 'verbs' RDMA provider 00:01:28.979 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.191 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.191 Creating mk/config.mk...done. 00:01:41.191 Creating mk/cc.flags.mk...done. 00:01:41.192 Type 'make' to build. 
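Both ./configure invocations above come out of the same llvm_precompile setup. Condensed, it amounts to the sketch below; the sed extraction is a paraphrase of the version regex matched in autobuild_common.sh, the Fedora clang library path is only known to hold on this host, and the flag list is abridged from the full one in the log:

# Derive the clang major version and locate clang's libFuzzer archive
# without a main() (SPDK's fuzzers supply their own entry point).
clang_num=$(clang --version | sed -n 's/^clang version \([0-9]*\).*/\1/p')
export CC=clang-$clang_num CXX=clang++-$clang_num
fuzzer_lib=/usr/lib64/clang/$clang_num/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
[[ -e $fuzzer_lib ]] || exit 1
# Abridged: the job also passes --with-rdma, --with-idxd, --with-fio,
# --with-iscsi-initiator, --disable-unit-tests and --with-ublk.
./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
            --with-vfio-user --with-fuzzer="$fuzzer_lib"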
00:01:41.192 20:00:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j88 00:01:41.192 20:00:26 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:41.192 20:00:26 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:41.192 20:00:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.192 ************************************ 00:01:41.192 START TEST make 00:01:41.192 ************************************ 00:01:41.192 20:00:26 make -- common/autotest_common.sh@1121 -- $ make -j88 00:01:41.192 make[1]: Nothing to be done for 'all'. 00:01:41.451 The Meson build system 00:01:41.451 Version: 1.3.1 00:01:41.451 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:41.451 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.451 Build type: native build 00:01:41.451 Project name: libvfio-user 00:01:41.451 Project version: 0.0.1 00:01:41.451 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:41.451 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:41.451 Host machine cpu family: x86_64 00:01:41.451 Host machine cpu: x86_64 00:01:41.451 Run-time dependency threads found: YES 00:01:41.451 Library dl found: YES 00:01:41.451 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:41.451 Run-time dependency json-c found: YES 0.17 00:01:41.451 Run-time dependency cmocka found: YES 1.1.7 00:01:41.451 Program pytest-3 found: NO 00:01:41.451 Program flake8 found: NO 00:01:41.451 Program misspell-fixer found: NO 00:01:41.451 Program restructuredtext-lint found: NO 00:01:41.451 Program valgrind found: YES (/usr/bin/valgrind) 00:01:41.451 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:41.451 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:41.451 Compiler for C supports arguments -Wwrite-strings: YES 00:01:41.451 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:41.451 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:41.451 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:41.451 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:41.451 Build targets in project: 8 00:01:41.451 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:41.451 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:41.451 00:01:41.451 libvfio-user 0.0.1 00:01:41.451 00:01:41.451 User defined options 00:01:41.451 buildtype : debug 00:01:41.451 default_library: static 00:01:41.451 libdir : /usr/local/lib 00:01:41.451 00:01:41.451 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:41.709 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:41.709 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:41.709 [2/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:41.709 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:41.709 [4/36] Compiling C object samples/null.p/null.c.o 00:01:41.968 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:41.968 [6/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:41.968 [7/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:41.968 [8/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:41.968 [9/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:41.968 [10/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:41.968 [11/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:41.968 [12/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:41.968 [13/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:41.968 [14/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:41.968 [15/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:41.968 [16/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:41.968 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:41.968 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:41.968 [19/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:41.968 [20/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:41.968 [21/36] Compiling C object samples/server.p/server.c.o 00:01:41.968 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:41.968 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:41.968 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:41.968 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:41.968 [26/36] Compiling C object samples/client.p/client.c.o 00:01:41.968 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:41.968 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:41.968 [29/36] Linking target samples/client 00:01:41.968 [30/36] Linking static target lib/libvfio-user.a 00:01:41.968 [31/36] Linking target test/unit_tests 00:01:41.968 [32/36] Linking target samples/shadow_ioeventfd_server 00:01:41.968 [33/36] Linking target samples/gpio-pci-idio-16 00:01:41.968 [34/36] Linking target samples/null 00:01:41.968 [35/36] Linking target samples/lspci 00:01:41.968 [36/36] Linking target samples/server 00:01:41.968 INFO: autodetecting backend as ninja 00:01:41.968 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.968 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.295 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:42.295 ninja: no work to do. 00:01:47.586 The Meson build system 00:01:47.586 Version: 1.3.1 00:01:47.586 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:47.586 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:47.586 Build type: native build 00:01:47.586 Program cat found: YES (/usr/bin/cat) 00:01:47.586 Project name: DPDK 00:01:47.586 Project version: 24.03.0 00:01:47.586 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:47.586 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:47.586 Host machine cpu family: x86_64 00:01:47.586 Host machine cpu: x86_64 00:01:47.586 Message: ## Building in Developer Mode ## 00:01:47.586 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:47.586 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:47.586 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:47.586 Program python3 found: YES (/usr/bin/python3) 00:01:47.587 Program cat found: YES (/usr/bin/cat) 00:01:47.587 Compiler for C supports arguments -march=native: YES 00:01:47.587 Checking for size of "void *" : 8 00:01:47.587 Checking for size of "void *" : 8 (cached) 00:01:47.587 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:47.587 Library m found: YES 00:01:47.587 Library numa found: YES 00:01:47.587 Has header "numaif.h" : YES 00:01:47.587 Library fdt found: NO 00:01:47.587 Library execinfo found: NO 00:01:47.587 Has header "execinfo.h" : YES 00:01:47.587 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:47.587 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:47.587 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:47.587 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:47.587 Run-time dependency openssl found: YES 3.0.9 00:01:47.587 Run-time dependency libpcap found: YES 1.10.4 00:01:47.587 Has header "pcap.h" with dependency libpcap: YES 00:01:47.587 Compiler for C supports arguments -Wcast-qual: YES 00:01:47.587 Compiler for C supports arguments -Wdeprecated: YES 00:01:47.587 Compiler for C supports arguments -Wformat: YES 00:01:47.587 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:47.587 Compiler for C supports arguments -Wformat-security: YES 00:01:47.587 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.587 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:47.587 Compiler for C supports arguments -Wnested-externs: YES 00:01:47.587 Compiler for C supports arguments -Wold-style-definition: YES 00:01:47.587 Compiler for C supports arguments -Wpointer-arith: YES 00:01:47.587 Compiler for C supports arguments -Wsign-compare: YES 00:01:47.587 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:47.587 Compiler for C supports arguments -Wundef: YES 00:01:47.587 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.587 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:47.587 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:47.587 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
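Each "Compiler for C supports arguments ...: YES/NO" line above is a probe meson runs against the host compiler; roughly the same check can be reproduced by hand (a sketch, not meson's exact invocation, which additionally pins down unknown-warning-option handling):

# Compile an empty translation unit with the candidate flag; under
# -Werror an unknown warning option fails the compile, so the exit
# status separates YES from NO.  For -Wno-... flags the positive
# spelling is what effectively gets tested, since clang accepts
# unknown -Wno- options silently.
probe() {
    echo 'int main(void){return 0;}' |
        clang-16 -Werror "$1" -c -x c - -o /dev/null 2>/dev/null
}
probe -Wwrite-strings && echo YES || echo NO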
00:01:47.587 Program objdump found: YES (/usr/bin/objdump) 00:01:47.587 Compiler for C supports arguments -mavx512f: YES 00:01:47.587 Checking if "AVX512 checking" compiles: YES 00:01:47.587 Fetching value of define "__SSE4_2__" : 1 00:01:47.587 Fetching value of define "__AES__" : 1 00:01:47.587 Fetching value of define "__AVX__" : 1 00:01:47.587 Fetching value of define "__AVX2__" : 1 00:01:47.587 Fetching value of define "__AVX512BW__" : 1 00:01:47.587 Fetching value of define "__AVX512CD__" : 1 00:01:47.587 Fetching value of define "__AVX512DQ__" : 1 00:01:47.587 Fetching value of define "__AVX512F__" : 1 00:01:47.587 Fetching value of define "__AVX512VL__" : 1 00:01:47.587 Fetching value of define "__PCLMUL__" : 1 00:01:47.587 Fetching value of define "__RDRND__" : 1 00:01:47.587 Fetching value of define "__RDSEED__" : 1 00:01:47.587 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:47.587 Fetching value of define "__znver1__" : (undefined) 00:01:47.587 Fetching value of define "__znver2__" : (undefined) 00:01:47.587 Fetching value of define "__znver3__" : (undefined) 00:01:47.587 Fetching value of define "__znver4__" : (undefined) 00:01:47.587 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:47.587 Message: lib/log: Defining dependency "log" 00:01:47.587 Message: lib/kvargs: Defining dependency "kvargs" 00:01:47.587 Message: lib/telemetry: Defining dependency "telemetry" 00:01:47.587 Checking for function "getentropy" : NO 00:01:47.587 Message: lib/eal: Defining dependency "eal" 00:01:47.587 Message: lib/ring: Defining dependency "ring" 00:01:47.587 Message: lib/rcu: Defining dependency "rcu" 00:01:47.587 Message: lib/mempool: Defining dependency "mempool" 00:01:47.587 Message: lib/mbuf: Defining dependency "mbuf" 00:01:47.587 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:47.587 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.587 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.587 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.587 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:47.587 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:47.587 Compiler for C supports arguments -mpclmul: YES 00:01:47.587 Compiler for C supports arguments -maes: YES 00:01:47.587 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.587 Compiler for C supports arguments -mavx512bw: YES 00:01:47.587 Compiler for C supports arguments -mavx512dq: YES 00:01:47.587 Compiler for C supports arguments -mavx512vl: YES 00:01:47.587 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:47.587 Compiler for C supports arguments -mavx2: YES 00:01:47.587 Compiler for C supports arguments -mavx: YES 00:01:47.587 Message: lib/net: Defining dependency "net" 00:01:47.587 Message: lib/meter: Defining dependency "meter" 00:01:47.587 Message: lib/ethdev: Defining dependency "ethdev" 00:01:47.587 Message: lib/pci: Defining dependency "pci" 00:01:47.587 Message: lib/cmdline: Defining dependency "cmdline" 00:01:47.587 Message: lib/hash: Defining dependency "hash" 00:01:47.587 Message: lib/timer: Defining dependency "timer" 00:01:47.587 Message: lib/compressdev: Defining dependency "compressdev" 00:01:47.587 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:47.587 Message: lib/dmadev: Defining dependency "dmadev" 00:01:47.587 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:47.587 Message: lib/power: Defining dependency "power" 00:01:47.587 Message: lib/reorder: Defining 
dependency "reorder" 00:01:47.587 Message: lib/security: Defining dependency "security" 00:01:47.587 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:01:47.587 Message: lib/stack: Defining dependency "stack" 00:01:47.587 Has header "linux/userfaultfd.h" : YES 00:01:47.587 Has header "linux/vduse.h" : YES 00:01:47.587 Message: lib/vhost: Defining dependency "vhost" 00:01:47.587 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:47.587 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:47.587 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:47.587 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:47.587 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:47.587 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:47.587 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:47.587 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:47.587 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:47.587 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:47.587 Program doxygen found: YES (/usr/bin/doxygen) 00:01:47.587 Configuring doxy-api-html.conf using configuration 00:01:47.587 Configuring doxy-api-man.conf using configuration 00:01:47.587 Program mandb found: YES (/usr/bin/mandb) 00:01:47.587 Program sphinx-build found: NO 00:01:47.587 Configuring rte_build_config.h using configuration 00:01:47.587 Message: 00:01:47.587 ================= 00:01:47.587 Applications Enabled 00:01:47.587 ================= 00:01:47.587 00:01:47.587 apps: 00:01:47.587 00:01:47.587 00:01:47.587 Message: 00:01:47.587 ================= 00:01:47.587 Libraries Enabled 00:01:47.587 ================= 00:01:47.587 00:01:47.587 libs: 00:01:47.587 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:47.587 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:47.587 cryptodev, dmadev, power, reorder, security, stack, vhost, 00:01:47.587 00:01:47.587 Message: 00:01:47.587 =============== 00:01:47.587 Drivers Enabled 00:01:47.587 =============== 00:01:47.587 00:01:47.587 common: 00:01:47.587 00:01:47.587 bus: 00:01:47.587 pci, vdev, 00:01:47.587 mempool: 00:01:47.587 ring, 00:01:47.587 dma: 00:01:47.587 00:01:47.587 net: 00:01:47.587 00:01:47.587 crypto: 00:01:47.587 00:01:47.587 compress: 00:01:47.587 00:01:47.587 vdpa: 00:01:47.587 00:01:47.587 00:01:47.587 Message: 00:01:47.587 ================= 00:01:47.587 Content Skipped 00:01:47.587 ================= 00:01:47.587 00:01:47.587 apps: 00:01:47.587 dumpcap: explicitly disabled via build config 00:01:47.587 graph: explicitly disabled via build config 00:01:47.587 pdump: explicitly disabled via build config 00:01:47.587 proc-info: explicitly disabled via build config 00:01:47.587 test-acl: explicitly disabled via build config 00:01:47.587 test-bbdev: explicitly disabled via build config 00:01:47.587 test-cmdline: explicitly disabled via build config 00:01:47.587 test-compress-perf: explicitly disabled via build config 00:01:47.587 test-crypto-perf: explicitly disabled via build config 00:01:47.587 test-dma-perf: explicitly disabled via build config 00:01:47.587 test-eventdev: explicitly disabled via build config 00:01:47.587 test-fib: explicitly disabled via build config 00:01:47.587 test-flow-perf: explicitly disabled via build config 00:01:47.587 test-gpudev: explicitly disabled via build config 
00:01:47.587 test-mldev: explicitly disabled via build config 00:01:47.587 test-pipeline: explicitly disabled via build config 00:01:47.587 test-pmd: explicitly disabled via build config 00:01:47.587 test-regex: explicitly disabled via build config 00:01:47.587 test-sad: explicitly disabled via build config 00:01:47.587 test-security-perf: explicitly disabled via build config 00:01:47.587 00:01:47.587 libs: 00:01:47.587 argparse: explicitly disabled via build config 00:01:47.587 metrics: explicitly disabled via build config 00:01:47.587 acl: explicitly disabled via build config 00:01:47.587 bbdev: explicitly disabled via build config 00:01:47.587 bitratestats: explicitly disabled via build config 00:01:47.587 bpf: explicitly disabled via build config 00:01:47.587 cfgfile: explicitly disabled via build config 00:01:47.587 distributor: explicitly disabled via build config 00:01:47.587 efd: explicitly disabled via build config 00:01:47.587 eventdev: explicitly disabled via build config 00:01:47.587 dispatcher: explicitly disabled via build config 00:01:47.587 gpudev: explicitly disabled via build config 00:01:47.588 gro: explicitly disabled via build config 00:01:47.588 gso: explicitly disabled via build config 00:01:47.588 ip_frag: explicitly disabled via build config 00:01:47.588 jobstats: explicitly disabled via build config 00:01:47.588 latencystats: explicitly disabled via build config 00:01:47.588 lpm: explicitly disabled via build config 00:01:47.588 member: explicitly disabled via build config 00:01:47.588 pcapng: explicitly disabled via build config 00:01:47.588 rawdev: explicitly disabled via build config 00:01:47.588 regexdev: explicitly disabled via build config 00:01:47.588 mldev: explicitly disabled via build config 00:01:47.588 rib: explicitly disabled via build config 00:01:47.588 sched: explicitly disabled via build config 00:01:47.588 ipsec: explicitly disabled via build config 00:01:47.588 pdcp: explicitly disabled via build config 00:01:47.588 fib: explicitly disabled via build config 00:01:47.588 port: explicitly disabled via build config 00:01:47.588 pdump: explicitly disabled via build config 00:01:47.588 table: explicitly disabled via build config 00:01:47.588 pipeline: explicitly disabled via build config 00:01:47.588 graph: explicitly disabled via build config 00:01:47.588 node: explicitly disabled via build config 00:01:47.588 00:01:47.588 drivers: 00:01:47.588 common/cpt: not in enabled drivers build config 00:01:47.588 common/dpaax: not in enabled drivers build config 00:01:47.588 common/iavf: not in enabled drivers build config 00:01:47.588 common/idpf: not in enabled drivers build config 00:01:47.588 common/ionic: not in enabled drivers build config 00:01:47.588 common/mvep: not in enabled drivers build config 00:01:47.588 common/octeontx: not in enabled drivers build config 00:01:47.588 bus/auxiliary: not in enabled drivers build config 00:01:47.588 bus/cdx: not in enabled drivers build config 00:01:47.588 bus/dpaa: not in enabled drivers build config 00:01:47.588 bus/fslmc: not in enabled drivers build config 00:01:47.588 bus/ifpga: not in enabled drivers build config 00:01:47.588 bus/platform: not in enabled drivers build config 00:01:47.588 bus/uacce: not in enabled drivers build config 00:01:47.588 bus/vmbus: not in enabled drivers build config 00:01:47.588 common/cnxk: not in enabled drivers build config 00:01:47.588 common/mlx5: not in enabled drivers build config 00:01:47.588 common/nfp: not in enabled drivers build config 00:01:47.588 common/nitrox: not 
in enabled drivers build config 00:01:47.588 common/qat: not in enabled drivers build config 00:01:47.588 common/sfc_efx: not in enabled drivers build config 00:01:47.588 mempool/bucket: not in enabled drivers build config 00:01:47.588 mempool/cnxk: not in enabled drivers build config 00:01:47.588 mempool/dpaa: not in enabled drivers build config 00:01:47.588 mempool/dpaa2: not in enabled drivers build config 00:01:47.588 mempool/octeontx: not in enabled drivers build config 00:01:47.588 mempool/stack: not in enabled drivers build config 00:01:47.588 dma/cnxk: not in enabled drivers build config 00:01:47.588 dma/dpaa: not in enabled drivers build config 00:01:47.588 dma/dpaa2: not in enabled drivers build config 00:01:47.588 dma/hisilicon: not in enabled drivers build config 00:01:47.588 dma/idxd: not in enabled drivers build config 00:01:47.588 dma/ioat: not in enabled drivers build config 00:01:47.588 dma/skeleton: not in enabled drivers build config 00:01:47.588 net/af_packet: not in enabled drivers build config 00:01:47.588 net/af_xdp: not in enabled drivers build config 00:01:47.588 net/ark: not in enabled drivers build config 00:01:47.588 net/atlantic: not in enabled drivers build config 00:01:47.588 net/avp: not in enabled drivers build config 00:01:47.588 net/axgbe: not in enabled drivers build config 00:01:47.588 net/bnx2x: not in enabled drivers build config 00:01:47.588 net/bnxt: not in enabled drivers build config 00:01:47.588 net/bonding: not in enabled drivers build config 00:01:47.588 net/cnxk: not in enabled drivers build config 00:01:47.588 net/cpfl: not in enabled drivers build config 00:01:47.588 net/cxgbe: not in enabled drivers build config 00:01:47.588 net/dpaa: not in enabled drivers build config 00:01:47.588 net/dpaa2: not in enabled drivers build config 00:01:47.588 net/e1000: not in enabled drivers build config 00:01:47.588 net/ena: not in enabled drivers build config 00:01:47.588 net/enetc: not in enabled drivers build config 00:01:47.588 net/enetfec: not in enabled drivers build config 00:01:47.588 net/enic: not in enabled drivers build config 00:01:47.588 net/failsafe: not in enabled drivers build config 00:01:47.588 net/fm10k: not in enabled drivers build config 00:01:47.588 net/gve: not in enabled drivers build config 00:01:47.588 net/hinic: not in enabled drivers build config 00:01:47.588 net/hns3: not in enabled drivers build config 00:01:47.588 net/i40e: not in enabled drivers build config 00:01:47.588 net/iavf: not in enabled drivers build config 00:01:47.588 net/ice: not in enabled drivers build config 00:01:47.588 net/idpf: not in enabled drivers build config 00:01:47.588 net/igc: not in enabled drivers build config 00:01:47.588 net/ionic: not in enabled drivers build config 00:01:47.588 net/ipn3ke: not in enabled drivers build config 00:01:47.588 net/ixgbe: not in enabled drivers build config 00:01:47.588 net/mana: not in enabled drivers build config 00:01:47.588 net/memif: not in enabled drivers build config 00:01:47.588 net/mlx4: not in enabled drivers build config 00:01:47.588 net/mlx5: not in enabled drivers build config 00:01:47.588 net/mvneta: not in enabled drivers build config 00:01:47.588 net/mvpp2: not in enabled drivers build config 00:01:47.588 net/netvsc: not in enabled drivers build config 00:01:47.588 net/nfb: not in enabled drivers build config 00:01:47.588 net/nfp: not in enabled drivers build config 00:01:47.588 net/ngbe: not in enabled drivers build config 00:01:47.588 net/null: not in enabled drivers build config 00:01:47.588 
net/octeontx: not in enabled drivers build config 00:01:47.588 net/octeon_ep: not in enabled drivers build config 00:01:47.588 net/pcap: not in enabled drivers build config 00:01:47.588 net/pfe: not in enabled drivers build config 00:01:47.588 net/qede: not in enabled drivers build config 00:01:47.588 net/ring: not in enabled drivers build config 00:01:47.588 net/sfc: not in enabled drivers build config 00:01:47.588 net/softnic: not in enabled drivers build config 00:01:47.588 net/tap: not in enabled drivers build config 00:01:47.588 net/thunderx: not in enabled drivers build config 00:01:47.588 net/txgbe: not in enabled drivers build config 00:01:47.588 net/vdev_netvsc: not in enabled drivers build config 00:01:47.588 net/vhost: not in enabled drivers build config 00:01:47.588 net/virtio: not in enabled drivers build config 00:01:47.588 net/vmxnet3: not in enabled drivers build config 00:01:47.588 raw/*: missing internal dependency, "rawdev" 00:01:47.588 crypto/armv8: not in enabled drivers build config 00:01:47.588 crypto/bcmfs: not in enabled drivers build config 00:01:47.588 crypto/caam_jr: not in enabled drivers build config 00:01:47.588 crypto/ccp: not in enabled drivers build config 00:01:47.588 crypto/cnxk: not in enabled drivers build config 00:01:47.588 crypto/dpaa_sec: not in enabled drivers build config 00:01:47.588 crypto/dpaa2_sec: not in enabled drivers build config 00:01:47.588 crypto/ipsec_mb: not in enabled drivers build config 00:01:47.588 crypto/mlx5: not in enabled drivers build config 00:01:47.588 crypto/mvsam: not in enabled drivers build config 00:01:47.588 crypto/nitrox: not in enabled drivers build config 00:01:47.588 crypto/null: not in enabled drivers build config 00:01:47.588 crypto/octeontx: not in enabled drivers build config 00:01:47.588 crypto/openssl: not in enabled drivers build config 00:01:47.588 crypto/scheduler: not in enabled drivers build config 00:01:47.588 crypto/uadk: not in enabled drivers build config 00:01:47.588 crypto/virtio: not in enabled drivers build config 00:01:47.588 compress/isal: not in enabled drivers build config 00:01:47.588 compress/mlx5: not in enabled drivers build config 00:01:47.588 compress/nitrox: not in enabled drivers build config 00:01:47.588 compress/octeontx: not in enabled drivers build config 00:01:47.588 compress/zlib: not in enabled drivers build config 00:01:47.588 regex/*: missing internal dependency, "regexdev" 00:01:47.588 ml/*: missing internal dependency, "mldev" 00:01:47.588 vdpa/ifc: not in enabled drivers build config 00:01:47.588 vdpa/mlx5: not in enabled drivers build config 00:01:47.588 vdpa/nfp: not in enabled drivers build config 00:01:47.588 vdpa/sfc: not in enabled drivers build config 00:01:47.588 event/*: missing internal dependency, "eventdev" 00:01:47.588 baseband/*: missing internal dependency, "bbdev" 00:01:47.588 gpu/*: missing internal dependency, "gpudev" 00:01:47.588 00:01:47.588 00:01:47.588 Build targets in project: 88 00:01:47.588 00:01:47.588 DPDK 24.03.0 00:01:47.588 00:01:47.588 User defined options 00:01:47.588 buildtype : debug 00:01:47.588 default_library : static 00:01:47.588 libdir : lib 00:01:47.588 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:47.588 c_args : -fPIC -Werror 00:01:47.588 c_link_args : 00:01:47.588 cpu_instruction_set: native 00:01:47.588 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:47.588 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:47.588 enable_docs : false 00:01:47.588 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:47.588 enable_kmods : false 00:01:47.588 tests : false 00:01:47.588 00:01:47.588 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:47.588 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:47.589 [1/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:47.589 [2/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:47.589 [3/274] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:47.589 [4/274] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:47.589 [5/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:47.589 [6/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:47.589 [7/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:47.589 [8/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:47.589 [9/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.589 [10/274] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:47.589 [11/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.589 [12/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:47.589 [13/274] Linking static target lib/librte_kvargs.a 00:01:47.589 [14/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.589 [15/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.589 [16/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.589 [17/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:47.589 [18/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.589 [19/274] Linking static target lib/librte_log.a 00:01:47.847 [20/274] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.107 [21/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.107 [22/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.107 [23/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.107 [24/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:48.107 [25/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:48.107 [26/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:48.107 [27/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.107 [28/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.107 [29/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:48.107 [30/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.107 [31/274] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.107 [32/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:48.107 [33/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:48.107 [34/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.107 [35/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:48.107 [36/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:48.107 [37/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.107 [38/274] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.107 [39/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.107 [40/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:48.107 [41/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.107 [42/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.107 [43/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:48.107 [44/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:48.107 [45/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.107 [46/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:48.107 [47/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.107 [48/274] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.107 [49/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.107 [50/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.107 [51/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.107 [52/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.107 [53/274] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.107 [54/274] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.107 [55/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.107 [56/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.107 [57/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.107 [58/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.107 [59/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.107 [60/274] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.107 [61/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.107 [62/274] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.107 [63/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.107 [64/274] Linking static target lib/librte_telemetry.a 00:01:48.107 [65/274] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.107 [66/274] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.107 [67/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.107 [68/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:48.107 [69/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.107 [70/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.107 [71/274] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:48.107 [72/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.107 [73/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.107 [74/274] Linking static target lib/librte_pci.a 00:01:48.107 [75/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.107 [76/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.107 [77/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.107 [78/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:48.107 [79/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.107 [80/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.108 [81/274] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:48.108 [82/274] Linking static target lib/librte_meter.a 00:01:48.108 [83/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.108 [84/274] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.108 [85/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.108 [86/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.108 [87/274] Linking static target lib/librte_ring.a 00:01:48.108 [88/274] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.108 [89/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.108 [90/274] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.108 [91/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.108 [92/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.108 [93/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.108 [94/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.108 [95/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.108 [96/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.108 [97/274] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.108 [98/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.108 [99/274] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.108 [100/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.108 [101/274] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.108 [102/274] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.108 [103/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.108 [104/274] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.108 [105/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.108 [106/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.108 [107/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.108 [108/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.108 [109/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.108 [110/274] Linking target lib/librte_log.so.24.1 00:01:48.366 [111/274] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.366 [112/274] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:48.366 [113/274] Linking static target lib/librte_eal.a 00:01:48.366 [114/274] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:48.366 [115/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:48.366 [116/274] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.366 [117/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:48.366 [118/274] Linking static target lib/librte_net.a 00:01:48.366 [119/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.366 [120/274] Linking static target lib/librte_rcu.a 00:01:48.366 [121/274] Linking static target lib/librte_mempool.a 00:01:48.366 [122/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:48.366 [123/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:48.366 [124/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:48.366 [125/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:48.366 [126/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:48.366 [127/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:48.366 [128/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.366 [129/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:48.366 [130/274] Linking static target lib/librte_mbuf.a 00:01:48.366 [131/274] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:48.366 [132/274] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.366 [133/274] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.366 [134/274] Linking target lib/librte_kvargs.so.24.1 00:01:48.366 [135/274] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.366 [136/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:48.624 [137/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:48.624 [138/274] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.624 [139/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:48.624 [140/274] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.624 [141/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:48.624 [142/274] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.624 [143/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.624 [144/274] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:48.624 [145/274] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.624 [146/274] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.624 [147/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.624 [148/274] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:48.624 [149/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:48.624 [150/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.624 [151/274] Linking target lib/librte_telemetry.so.24.1 00:01:48.624 [152/274] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:48.624 [153/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:48.624 [154/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.624 [155/274] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:48.624 [156/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:48.624 [157/274] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.624 [158/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:48.624 [159/274] Linking static target lib/librte_reorder.a 00:01:48.624 [160/274] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:48.624 [161/274] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.624 [162/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.624 [163/274] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:48.624 [164/274] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:48.624 [165/274] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:48.624 [166/274] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.624 [167/274] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.624 [168/274] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.624 [169/274] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:48.624 [170/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:48.624 [171/274] Linking static target lib/librte_timer.a 00:01:48.624 [172/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:48.625 [173/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:48.625 [174/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:48.625 [175/274] Linking static target lib/librte_compressdev.a 00:01:48.625 [176/274] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.625 [177/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.625 [178/274] Linking static target lib/librte_stack.a 00:01:48.625 [179/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:48.625 [180/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.625 [181/274] Linking static target lib/librte_cmdline.a 00:01:48.625 [182/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:48.883 [183/274] Linking static target lib/librte_security.a 00:01:48.883 [184/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:48.883 [185/274] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.883 [186/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:48.883 [187/274] Linking static target lib/librte_dmadev.a 00:01:48.883 [188/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.883 [189/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.883 [190/274] Linking static target lib/librte_hash.a 00:01:48.883 [191/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.883 [192/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.883 [193/274] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.883 [194/274] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.883 [195/274] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.883 [196/274] Linking static target lib/librte_power.a 00:01:48.883 [197/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:48.883 [198/274] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.883 [199/274] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.883 [200/274] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.883 [201/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.883 [202/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:48.883 [203/274] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.883 [204/274] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.883 [205/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.883 [206/274] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.883 [207/274] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.883 [208/274] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.883 [209/274] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.141 [210/274] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.141 [211/274] Linking static target drivers/librte_bus_vdev.a 00:01:49.141 [212/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.141 [213/274] Linking static target lib/librte_ethdev.a 00:01:49.141 [214/274] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.141 [215/274] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.141 [216/274] Linking static target lib/librte_cryptodev.a 00:01:49.141 [217/274] Linking static target drivers/librte_bus_pci.a 00:01:49.141 [218/274] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.141 [219/274] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.141 [220/274] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.141 [221/274] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.141 [222/274] Linking static target drivers/librte_mempool_ring.a 00:01:49.141 [223/274] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.141 [224/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.398 [225/274] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.398 [226/274] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.398 [227/274] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.398 [228/274] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.656 [229/274] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.656 [230/274] Generating lib/power.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:49.657 [231/274] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.914 [232/274] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.914 [233/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:49.914 [234/274] Linking static target lib/librte_vhost.a 00:01:50.847 [235/274] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.777 [236/274] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.036 [237/274] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.605 [238/274] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.605 [239/274] Linking target lib/librte_eal.so.24.1 00:01:57.605 [240/274] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:57.605 [241/274] Linking target lib/librte_timer.so.24.1 00:01:57.605 [242/274] Linking target lib/librte_ring.so.24.1 00:01:57.605 [243/274] Linking target drivers/librte_bus_vdev.so.24.1 00:01:57.605 [244/274] Linking target lib/librte_dmadev.so.24.1 00:01:57.605 [245/274] Linking target lib/librte_meter.so.24.1 00:01:57.605 [246/274] Linking target lib/librte_stack.so.24.1 00:01:57.605 [247/274] Linking target lib/librte_pci.so.24.1 00:01:57.864 [248/274] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:57.864 [249/274] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:57.864 [250/274] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:57.864 [251/274] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:57.864 [252/274] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:57.865 [253/274] Linking target lib/librte_rcu.so.24.1 00:01:57.865 [254/274] Linking target lib/librte_mempool.so.24.1 00:01:57.865 [255/274] Linking target drivers/librte_bus_pci.so.24.1 00:01:57.865 [256/274] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:57.865 [257/274] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.123 [258/274] Linking target lib/librte_mbuf.so.24.1 00:01:58.123 [259/274] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.123 [260/274] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.123 [261/274] Linking target lib/librte_compressdev.so.24.1 00:01:58.123 [262/274] Linking target lib/librte_net.so.24.1 00:01:58.123 [263/274] Linking target lib/librte_cryptodev.so.24.1 00:01:58.123 [264/274] Linking target lib/librte_reorder.so.24.1 00:01:58.382 [265/274] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:58.382 [266/274] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:58.382 [267/274] Linking target lib/librte_cmdline.so.24.1 00:01:58.382 [268/274] Linking target lib/librte_hash.so.24.1 00:01:58.382 [269/274] Linking target lib/librte_ethdev.so.24.1 00:01:58.382 [270/274] Linking target lib/librte_security.so.24.1 00:01:58.639 [271/274] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:58.639 [272/274] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 
00:01:58.639 [273/274] Linking target lib/librte_power.so.24.1 00:01:58.639 [274/274] Linking target lib/librte_vhost.so.24.1 00:01:58.639 INFO: autodetecting backend as ninja 00:01:58.639 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 88 00:01:59.575 CC lib/ut/ut.o 00:01:59.575 CC lib/ut_mock/mock.o 00:01:59.575 CC lib/log/log.o 00:01:59.575 CC lib/log/log_flags.o 00:01:59.575 CC lib/log/log_deprecated.o 00:01:59.575 LIB libspdk_ut.a 00:01:59.575 LIB libspdk_ut_mock.a 00:01:59.575 LIB libspdk_log.a 00:01:59.833 CXX lib/trace_parser/trace.o 00:01:59.833 CC lib/util/base64.o 00:01:59.833 CC lib/util/bit_array.o 00:01:59.833 CC lib/util/cpuset.o 00:01:59.833 CC lib/util/crc16.o 00:01:59.833 CC lib/util/crc32.o 00:01:59.833 CC lib/util/crc32_ieee.o 00:01:59.833 CC lib/util/crc64.o 00:01:59.833 CC lib/util/crc32c.o 00:01:59.833 CC lib/util/dif.o 00:01:59.833 CC lib/util/fd.o 00:01:59.833 CC lib/util/file.o 00:01:59.833 CC lib/util/iov.o 00:01:59.833 CC lib/util/hexlify.o 00:01:59.833 CC lib/util/math.o 00:01:59.833 CC lib/util/pipe.o 00:01:59.833 CC lib/util/strerror_tls.o 00:01:59.833 CC lib/ioat/ioat.o 00:01:59.833 CC lib/util/string.o 00:01:59.833 CC lib/util/fd_group.o 00:01:59.833 CC lib/dma/dma.o 00:01:59.833 CC lib/util/uuid.o 00:01:59.833 CC lib/util/xor.o 00:01:59.833 CC lib/util/zipf.o 00:01:59.833 CC lib/vfio_user/host/vfio_user_pci.o 00:01:59.833 CC lib/vfio_user/host/vfio_user.o 00:02:00.091 LIB libspdk_dma.a 00:02:00.091 LIB libspdk_ioat.a 00:02:00.091 LIB libspdk_vfio_user.a 00:02:00.091 LIB libspdk_util.a 00:02:00.349 LIB libspdk_trace_parser.a 00:02:00.349 CC lib/vmd/led.o 00:02:00.349 CC lib/vmd/vmd.o 00:02:00.349 CC lib/conf/conf.o 00:02:00.349 CC lib/json/json_util.o 00:02:00.349 CC lib/json/json_parse.o 00:02:00.349 CC lib/json/json_write.o 00:02:00.349 CC lib/idxd/idxd.o 00:02:00.349 CC lib/idxd/idxd_user.o 00:02:00.349 CC lib/rdma/common.o 00:02:00.349 CC lib/rdma/rdma_verbs.o 00:02:00.349 CC lib/env_dpdk/env.o 00:02:00.349 CC lib/env_dpdk/memory.o 00:02:00.349 CC lib/env_dpdk/pci.o 00:02:00.349 CC lib/env_dpdk/init.o 00:02:00.349 CC lib/env_dpdk/threads.o 00:02:00.349 CC lib/env_dpdk/pci_ioat.o 00:02:00.349 CC lib/env_dpdk/pci_virtio.o 00:02:00.349 CC lib/env_dpdk/pci_vmd.o 00:02:00.349 CC lib/env_dpdk/pci_idxd.o 00:02:00.349 CC lib/env_dpdk/pci_event.o 00:02:00.349 CC lib/env_dpdk/sigbus_handler.o 00:02:00.349 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:00.349 CC lib/env_dpdk/pci_dpdk.o 00:02:00.349 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:00.607 LIB libspdk_conf.a 00:02:00.607 LIB libspdk_json.a 00:02:00.607 LIB libspdk_rdma.a 00:02:00.866 LIB libspdk_idxd.a 00:02:00.866 LIB libspdk_vmd.a 00:02:00.866 CC lib/jsonrpc/jsonrpc_server.o 00:02:00.866 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:00.866 CC lib/jsonrpc/jsonrpc_client.o 00:02:00.866 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:01.125 LIB libspdk_jsonrpc.a 00:02:01.384 CC lib/rpc/rpc.o 00:02:01.384 LIB libspdk_env_dpdk.a 00:02:01.384 LIB libspdk_rpc.a 00:02:01.643 CC lib/trace/trace.o 00:02:01.643 CC lib/trace/trace_flags.o 00:02:01.643 CC lib/trace/trace_rpc.o 00:02:01.643 CC lib/keyring/keyring.o 00:02:01.643 CC lib/keyring/keyring_rpc.o 00:02:01.643 CC lib/notify/notify.o 00:02:01.643 CC lib/notify/notify_rpc.o 00:02:01.903 LIB libspdk_notify.a 00:02:01.903 LIB libspdk_trace.a 00:02:01.903 LIB libspdk_keyring.a 00:02:02.161 CC lib/sock/sock.o 00:02:02.161 CC lib/sock/sock_rpc.o 00:02:02.161 CC lib/thread/thread.o 00:02:02.161 CC 
lib/thread/iobuf.o 00:02:02.420 LIB libspdk_sock.a 00:02:02.678 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:02.678 CC lib/nvme/nvme_ctrlr.o 00:02:02.678 CC lib/nvme/nvme_fabric.o 00:02:02.678 CC lib/nvme/nvme_ns_cmd.o 00:02:02.678 CC lib/nvme/nvme_ns.o 00:02:02.678 CC lib/nvme/nvme_pcie_common.o 00:02:02.678 CC lib/nvme/nvme_pcie.o 00:02:02.678 CC lib/nvme/nvme.o 00:02:02.678 CC lib/nvme/nvme_qpair.o 00:02:02.678 CC lib/nvme/nvme_transport.o 00:02:02.678 CC lib/nvme/nvme_discovery.o 00:02:02.678 CC lib/nvme/nvme_quirks.o 00:02:02.678 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:02.678 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:02.678 CC lib/nvme/nvme_tcp.o 00:02:02.678 CC lib/nvme/nvme_opal.o 00:02:02.678 CC lib/nvme/nvme_io_msg.o 00:02:02.678 CC lib/nvme/nvme_poll_group.o 00:02:02.678 CC lib/nvme/nvme_zns.o 00:02:02.678 CC lib/nvme/nvme_stubs.o 00:02:02.678 CC lib/nvme/nvme_cuse.o 00:02:02.678 CC lib/nvme/nvme_auth.o 00:02:02.678 CC lib/nvme/nvme_vfio_user.o 00:02:02.678 CC lib/nvme/nvme_rdma.o 00:02:02.935 LIB libspdk_thread.a 00:02:03.199 CC lib/virtio/virtio.o 00:02:03.199 CC lib/blob/request.o 00:02:03.199 CC lib/blob/blobstore.o 00:02:03.199 CC lib/virtio/virtio_vhost_user.o 00:02:03.199 CC lib/virtio/virtio_vfio_user.o 00:02:03.199 CC lib/virtio/virtio_pci.o 00:02:03.199 CC lib/blob/zeroes.o 00:02:03.199 CC lib/blob/blob_bs_dev.o 00:02:03.199 CC lib/accel/accel_sw.o 00:02:03.199 CC lib/accel/accel.o 00:02:03.199 CC lib/accel/accel_rpc.o 00:02:03.199 CC lib/vfu_tgt/tgt_endpoint.o 00:02:03.199 CC lib/vfu_tgt/tgt_rpc.o 00:02:03.199 CC lib/init/json_config.o 00:02:03.199 CC lib/init/subsystem.o 00:02:03.199 CC lib/init/subsystem_rpc.o 00:02:03.199 CC lib/init/rpc.o 00:02:03.457 LIB libspdk_init.a 00:02:03.457 LIB libspdk_virtio.a 00:02:03.457 LIB libspdk_vfu_tgt.a 00:02:03.714 CC lib/event/app.o 00:02:03.714 CC lib/event/log_rpc.o 00:02:03.714 CC lib/event/reactor.o 00:02:03.714 CC lib/event/app_rpc.o 00:02:03.714 CC lib/event/scheduler_static.o 00:02:03.973 LIB libspdk_event.a 00:02:03.973 LIB libspdk_accel.a 00:02:03.973 LIB libspdk_nvme.a 00:02:04.231 CC lib/bdev/bdev.o 00:02:04.231 CC lib/bdev/bdev_rpc.o 00:02:04.231 CC lib/bdev/bdev_zone.o 00:02:04.231 CC lib/bdev/part.o 00:02:04.231 CC lib/bdev/scsi_nvme.o 00:02:05.167 LIB libspdk_blob.a 00:02:05.167 CC lib/blobfs/tree.o 00:02:05.167 CC lib/blobfs/blobfs.o 00:02:05.167 CC lib/lvol/lvol.o 00:02:05.733 LIB libspdk_lvol.a 00:02:05.733 LIB libspdk_blobfs.a 00:02:05.992 LIB libspdk_bdev.a 00:02:06.251 CC lib/scsi/dev.o 00:02:06.251 CC lib/ftl/ftl_core.o 00:02:06.251 CC lib/scsi/lun.o 00:02:06.251 CC lib/ftl/ftl_init.o 00:02:06.251 CC lib/scsi/port.o 00:02:06.251 CC lib/scsi/scsi.o 00:02:06.251 CC lib/ftl/ftl_layout.o 00:02:06.251 CC lib/nvmf/ctrlr.o 00:02:06.251 CC lib/scsi/scsi_bdev.o 00:02:06.251 CC lib/ftl/ftl_debug.o 00:02:06.251 CC lib/scsi/scsi_rpc.o 00:02:06.251 CC lib/scsi/scsi_pr.o 00:02:06.251 CC lib/ftl/ftl_io.o 00:02:06.251 CC lib/nvmf/ctrlr_discovery.o 00:02:06.251 CC lib/scsi/task.o 00:02:06.251 CC lib/ftl/ftl_l2p.o 00:02:06.251 CC lib/nvmf/ctrlr_bdev.o 00:02:06.251 CC lib/ftl/ftl_sb.o 00:02:06.251 CC lib/nvmf/subsystem.o 00:02:06.251 CC lib/nvmf/nvmf.o 00:02:06.251 CC lib/nvmf/nvmf_rpc.o 00:02:06.251 CC lib/ftl/ftl_nv_cache.o 00:02:06.251 CC lib/ftl/ftl_l2p_flat.o 00:02:06.251 CC lib/nvmf/transport.o 00:02:06.251 CC lib/nvmf/tcp.o 00:02:06.251 CC lib/nvmf/stubs.o 00:02:06.251 CC lib/ftl/ftl_band.o 00:02:06.251 CC lib/ftl/ftl_band_ops.o 00:02:06.251 CC lib/nvmf/mdns_server.o 00:02:06.251 CC lib/ftl/ftl_writer.o 00:02:06.251 CC 
lib/nvmf/vfio_user.o 00:02:06.251 CC lib/ftl/ftl_rq.o 00:02:06.251 CC lib/nvmf/rdma.o 00:02:06.251 CC lib/ftl/ftl_reloc.o 00:02:06.251 CC lib/ftl/ftl_l2p_cache.o 00:02:06.251 CC lib/nvmf/auth.o 00:02:06.251 CC lib/ftl/ftl_p2l.o 00:02:06.251 CC lib/ublk/ublk.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt.o 00:02:06.251 CC lib/ublk/ublk_rpc.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:06.251 CC lib/nbd/nbd.o 00:02:06.251 CC lib/nbd/nbd_rpc.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:06.251 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:06.251 CC lib/ftl/utils/ftl_conf.o 00:02:06.251 CC lib/ftl/utils/ftl_md.o 00:02:06.251 CC lib/ftl/utils/ftl_mempool.o 00:02:06.251 CC lib/ftl/utils/ftl_bitmap.o 00:02:06.251 CC lib/ftl/utils/ftl_property.o 00:02:06.251 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:06.251 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:06.251 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:06.251 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:06.251 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:06.251 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:06.251 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:06.251 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:06.251 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:06.251 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:06.251 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:06.251 CC lib/ftl/base/ftl_base_dev.o 00:02:06.251 CC lib/ftl/base/ftl_base_bdev.o 00:02:06.251 CC lib/ftl/ftl_trace.o 00:02:06.510 LIB libspdk_nbd.a 00:02:06.772 LIB libspdk_scsi.a 00:02:06.772 LIB libspdk_ublk.a 00:02:07.029 LIB libspdk_ftl.a 00:02:07.029 CC lib/iscsi/conn.o 00:02:07.029 CC lib/iscsi/init_grp.o 00:02:07.029 CC lib/iscsi/iscsi.o 00:02:07.029 CC lib/iscsi/md5.o 00:02:07.029 CC lib/iscsi/portal_grp.o 00:02:07.029 CC lib/iscsi/param.o 00:02:07.029 CC lib/iscsi/tgt_node.o 00:02:07.029 CC lib/iscsi/iscsi_subsystem.o 00:02:07.029 CC lib/iscsi/iscsi_rpc.o 00:02:07.029 CC lib/iscsi/task.o 00:02:07.029 CC lib/vhost/vhost_scsi.o 00:02:07.029 CC lib/vhost/vhost.o 00:02:07.029 CC lib/vhost/vhost_rpc.o 00:02:07.029 CC lib/vhost/vhost_blk.o 00:02:07.029 CC lib/vhost/rte_vhost_user.o 00:02:07.597 LIB libspdk_nvmf.a 00:02:07.597 LIB libspdk_vhost.a 00:02:07.906 LIB libspdk_iscsi.a 00:02:08.163 CC module/vfu_device/vfu_virtio_blk.o 00:02:08.163 CC module/vfu_device/vfu_virtio.o 00:02:08.163 CC module/vfu_device/vfu_virtio_scsi.o 00:02:08.163 CC module/vfu_device/vfu_virtio_rpc.o 00:02:08.163 CC module/env_dpdk/env_dpdk_rpc.o 00:02:08.163 LIB libspdk_env_dpdk_rpc.a 00:02:08.163 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:08.163 CC module/sock/posix/posix.o 00:02:08.163 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:08.163 CC module/blob/bdev/blob_bdev.o 00:02:08.421 CC module/accel/iaa/accel_iaa.o 00:02:08.421 CC module/keyring/file/keyring.o 00:02:08.421 CC module/accel/iaa/accel_iaa_rpc.o 00:02:08.421 CC module/accel/error/accel_error.o 00:02:08.421 CC module/keyring/file/keyring_rpc.o 00:02:08.421 CC module/accel/ioat/accel_ioat.o 00:02:08.421 CC module/accel/error/accel_error_rpc.o 00:02:08.421 CC module/accel/ioat/accel_ioat_rpc.o 00:02:08.421 CC module/scheduler/gscheduler/gscheduler.o 
00:02:08.421 CC module/accel/dsa/accel_dsa.o 00:02:08.421 CC module/accel/dsa/accel_dsa_rpc.o 00:02:08.421 LIB libspdk_scheduler_dpdk_governor.a 00:02:08.421 LIB libspdk_keyring_file.a 00:02:08.421 LIB libspdk_scheduler_dynamic.a 00:02:08.421 LIB libspdk_scheduler_gscheduler.a 00:02:08.421 LIB libspdk_accel_error.a 00:02:08.421 LIB libspdk_accel_ioat.a 00:02:08.421 LIB libspdk_accel_iaa.a 00:02:08.421 LIB libspdk_blob_bdev.a 00:02:08.421 LIB libspdk_accel_dsa.a 00:02:08.679 LIB libspdk_vfu_device.a 00:02:08.679 LIB libspdk_sock_posix.a 00:02:08.679 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.679 CC module/bdev/split/vbdev_split.o 00:02:08.937 CC module/bdev/nvme/bdev_nvme.o 00:02:08.937 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.937 CC module/bdev/nvme/nvme_rpc.o 00:02:08.937 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.937 CC module/bdev/nvme/vbdev_opal.o 00:02:08.937 CC module/bdev/delay/vbdev_delay.o 00:02:08.937 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.937 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.937 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.937 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.937 CC module/bdev/raid/bdev_raid.o 00:02:08.937 CC module/bdev/raid/raid0.o 00:02:08.937 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.937 CC module/bdev/aio/bdev_aio.o 00:02:08.937 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.937 CC module/bdev/error/vbdev_error.o 00:02:08.937 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.937 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.937 CC module/bdev/raid/concat.o 00:02:08.937 CC module/bdev/raid/raid1.o 00:02:08.937 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.937 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.937 CC module/bdev/malloc/bdev_malloc.o 00:02:08.937 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.937 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.937 CC module/bdev/ftl/bdev_ftl.o 00:02:08.937 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.937 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.937 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.937 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.937 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.937 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.937 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.937 CC module/bdev/null/bdev_null.o 00:02:08.937 CC module/bdev/null/bdev_null_rpc.o 00:02:08.937 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.937 CC module/bdev/gpt/gpt.o 00:02:08.937 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.937 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.937 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.937 LIB libspdk_bdev_split.a 00:02:08.937 LIB libspdk_blobfs_bdev.a 00:02:08.937 LIB libspdk_bdev_error.a 00:02:08.937 LIB libspdk_bdev_null.a 00:02:08.937 LIB libspdk_bdev_gpt.a 00:02:08.937 LIB libspdk_bdev_ftl.a 00:02:09.196 LIB libspdk_bdev_passthru.a 00:02:09.196 LIB libspdk_bdev_aio.a 00:02:09.196 LIB libspdk_bdev_iscsi.a 00:02:09.196 LIB libspdk_bdev_zone_block.a 00:02:09.196 LIB libspdk_bdev_delay.a 00:02:09.196 LIB libspdk_bdev_malloc.a 00:02:09.196 LIB libspdk_bdev_lvol.a 00:02:09.196 LIB libspdk_bdev_virtio.a 00:02:09.455 LIB libspdk_bdev_raid.a 00:02:10.023 LIB libspdk_bdev_nvme.a 00:02:10.607 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:10.607 CC module/event/subsystems/sock/sock.o 00:02:10.607 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:10.607 CC module/event/subsystems/vmd/vmd.o 00:02:10.607 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:10.607 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:10.607 CC 
module/event/subsystems/iobuf/iobuf.o 00:02:10.607 CC module/event/subsystems/keyring/keyring.o 00:02:10.607 CC module/event/subsystems/scheduler/scheduler.o 00:02:10.607 LIB libspdk_event_vhost_blk.a 00:02:10.607 LIB libspdk_event_vfu_tgt.a 00:02:10.871 LIB libspdk_event_sock.a 00:02:10.871 LIB libspdk_event_keyring.a 00:02:10.871 LIB libspdk_event_vmd.a 00:02:10.871 LIB libspdk_event_scheduler.a 00:02:10.871 LIB libspdk_event_iobuf.a 00:02:10.871 CC module/event/subsystems/accel/accel.o 00:02:11.134 LIB libspdk_event_accel.a 00:02:11.393 CC module/event/subsystems/bdev/bdev.o 00:02:11.393 LIB libspdk_event_bdev.a 00:02:11.653 CC module/event/subsystems/ublk/ublk.o 00:02:11.653 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:11.653 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:11.653 CC module/event/subsystems/scsi/scsi.o 00:02:11.653 CC module/event/subsystems/nbd/nbd.o 00:02:11.912 LIB libspdk_event_ublk.a 00:02:11.912 LIB libspdk_event_nbd.a 00:02:11.912 LIB libspdk_event_scsi.a 00:02:11.912 LIB libspdk_event_nvmf.a 00:02:12.170 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.170 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.170 LIB libspdk_event_vhost_scsi.a 00:02:12.170 LIB libspdk_event_iscsi.a 00:02:12.440 CC app/trace_record/trace_record.o 00:02:12.440 CC app/spdk_nvme_perf/perf.o 00:02:12.440 CC app/spdk_nvme_discover/discovery_aer.o 00:02:12.440 CC app/spdk_lspci/spdk_lspci.o 00:02:12.440 CC app/spdk_nvme_identify/identify.o 00:02:12.440 CXX app/trace/trace.o 00:02:12.440 CC app/spdk_top/spdk_top.o 00:02:12.440 TEST_HEADER include/spdk/accel.h 00:02:12.440 TEST_HEADER include/spdk/accel_module.h 00:02:12.440 TEST_HEADER include/spdk/assert.h 00:02:12.440 CC test/rpc_client/rpc_client_test.o 00:02:12.440 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:12.440 TEST_HEADER include/spdk/barrier.h 00:02:12.440 TEST_HEADER include/spdk/base64.h 00:02:12.440 TEST_HEADER include/spdk/bdev.h 00:02:12.440 TEST_HEADER include/spdk/bdev_module.h 00:02:12.440 TEST_HEADER include/spdk/bdev_zone.h 00:02:12.440 TEST_HEADER include/spdk/bit_array.h 00:02:12.440 TEST_HEADER include/spdk/bit_pool.h 00:02:12.440 TEST_HEADER include/spdk/blob_bdev.h 00:02:12.440 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:12.440 TEST_HEADER include/spdk/blobfs.h 00:02:12.440 TEST_HEADER include/spdk/blob.h 00:02:12.440 TEST_HEADER include/spdk/conf.h 00:02:12.440 TEST_HEADER include/spdk/config.h 00:02:12.440 TEST_HEADER include/spdk/cpuset.h 00:02:12.440 TEST_HEADER include/spdk/crc32.h 00:02:12.440 TEST_HEADER include/spdk/crc16.h 00:02:12.440 TEST_HEADER include/spdk/dif.h 00:02:12.440 TEST_HEADER include/spdk/crc64.h 00:02:12.440 TEST_HEADER include/spdk/dma.h 00:02:12.440 TEST_HEADER include/spdk/endian.h 00:02:12.440 TEST_HEADER include/spdk/env_dpdk.h 00:02:12.440 TEST_HEADER include/spdk/env.h 00:02:12.440 TEST_HEADER include/spdk/event.h 00:02:12.440 TEST_HEADER include/spdk/fd_group.h 00:02:12.440 TEST_HEADER include/spdk/fd.h 00:02:12.440 TEST_HEADER include/spdk/file.h 00:02:12.440 TEST_HEADER include/spdk/ftl.h 00:02:12.440 TEST_HEADER include/spdk/gpt_spec.h 00:02:12.440 TEST_HEADER include/spdk/hexlify.h 00:02:12.440 TEST_HEADER include/spdk/idxd.h 00:02:12.440 TEST_HEADER include/spdk/histogram_data.h 00:02:12.440 TEST_HEADER include/spdk/idxd_spec.h 00:02:12.440 TEST_HEADER include/spdk/ioat.h 00:02:12.440 TEST_HEADER include/spdk/init.h 00:02:12.440 TEST_HEADER include/spdk/iscsi_spec.h 00:02:12.440 TEST_HEADER include/spdk/ioat_spec.h 00:02:12.440 TEST_HEADER 
include/spdk/jsonrpc.h 00:02:12.440 TEST_HEADER include/spdk/json.h 00:02:12.440 TEST_HEADER include/spdk/keyring.h 00:02:12.440 TEST_HEADER include/spdk/keyring_module.h 00:02:12.440 TEST_HEADER include/spdk/likely.h 00:02:12.440 TEST_HEADER include/spdk/log.h 00:02:12.440 TEST_HEADER include/spdk/lvol.h 00:02:12.440 TEST_HEADER include/spdk/memory.h 00:02:12.440 TEST_HEADER include/spdk/mmio.h 00:02:12.440 TEST_HEADER include/spdk/nbd.h 00:02:12.440 TEST_HEADER include/spdk/notify.h 00:02:12.440 TEST_HEADER include/spdk/nvme.h 00:02:12.440 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:12.440 TEST_HEADER include/spdk/nvme_intel.h 00:02:12.440 TEST_HEADER include/spdk/nvme_spec.h 00:02:12.440 TEST_HEADER include/spdk/nvme_zns.h 00:02:12.440 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:12.440 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:12.440 TEST_HEADER include/spdk/nvmf.h 00:02:12.440 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:12.440 TEST_HEADER include/spdk/nvmf_spec.h 00:02:12.440 TEST_HEADER include/spdk/nvmf_transport.h 00:02:12.440 CC app/spdk_dd/spdk_dd.o 00:02:12.440 TEST_HEADER include/spdk/opal.h 00:02:12.440 TEST_HEADER include/spdk/opal_spec.h 00:02:12.440 TEST_HEADER include/spdk/pipe.h 00:02:12.440 TEST_HEADER include/spdk/pci_ids.h 00:02:12.440 TEST_HEADER include/spdk/queue.h 00:02:12.440 CC app/nvmf_tgt/nvmf_main.o 00:02:12.440 TEST_HEADER include/spdk/rpc.h 00:02:12.440 TEST_HEADER include/spdk/scheduler.h 00:02:12.440 TEST_HEADER include/spdk/reduce.h 00:02:12.706 TEST_HEADER include/spdk/scsi.h 00:02:12.706 TEST_HEADER include/spdk/sock.h 00:02:12.706 TEST_HEADER include/spdk/stdinc.h 00:02:12.706 TEST_HEADER include/spdk/scsi_spec.h 00:02:12.706 TEST_HEADER include/spdk/string.h 00:02:12.706 TEST_HEADER include/spdk/thread.h 00:02:12.706 TEST_HEADER include/spdk/trace.h 00:02:12.706 TEST_HEADER include/spdk/trace_parser.h 00:02:12.706 CC app/iscsi_tgt/iscsi_tgt.o 00:02:12.706 TEST_HEADER include/spdk/ublk.h 00:02:12.706 TEST_HEADER include/spdk/tree.h 00:02:12.706 TEST_HEADER include/spdk/util.h 00:02:12.706 TEST_HEADER include/spdk/uuid.h 00:02:12.706 TEST_HEADER include/spdk/version.h 00:02:12.706 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:12.706 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:12.706 CC app/vhost/vhost.o 00:02:12.706 TEST_HEADER include/spdk/zipf.h 00:02:12.706 TEST_HEADER include/spdk/vhost.h 00:02:12.706 TEST_HEADER include/spdk/vmd.h 00:02:12.706 TEST_HEADER include/spdk/xor.h 00:02:12.706 CXX test/cpp_headers/accel_module.o 00:02:12.706 CXX test/cpp_headers/accel.o 00:02:12.706 CXX test/cpp_headers/assert.o 00:02:12.706 CXX test/cpp_headers/barrier.o 00:02:12.706 CXX test/cpp_headers/base64.o 00:02:12.706 CXX test/cpp_headers/bdev.o 00:02:12.706 CXX test/cpp_headers/bdev_zone.o 00:02:12.706 CXX test/cpp_headers/bdev_module.o 00:02:12.706 CXX test/cpp_headers/bit_pool.o 00:02:12.706 CXX test/cpp_headers/bit_array.o 00:02:12.706 CXX test/cpp_headers/blobfs_bdev.o 00:02:12.706 CXX test/cpp_headers/blob_bdev.o 00:02:12.706 CC app/spdk_tgt/spdk_tgt.o 00:02:12.706 CC examples/nvme/abort/abort.o 00:02:12.706 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:12.706 CC examples/vmd/lsvmd/lsvmd.o 00:02:12.706 CC examples/nvme/reconnect/reconnect.o 00:02:12.706 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:12.706 CC examples/nvme/hello_world/hello_world.o 00:02:12.706 CC examples/nvme/hotplug/hotplug.o 00:02:12.706 CC examples/util/zipf/zipf.o 00:02:12.707 CC examples/nvme/arbitration/arbitration.o 00:02:12.707 CC examples/vmd/led/led.o 
00:02:12.707 CC examples/ioat/perf/perf.o 00:02:12.707 CC examples/sock/hello_world/hello_sock.o 00:02:12.707 CC examples/ioat/verify/verify.o 00:02:12.707 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:12.707 CC test/app/stub/stub.o 00:02:12.707 CC examples/idxd/perf/perf.o 00:02:12.707 CC examples/accel/perf/accel_perf.o 00:02:12.707 CC test/app/histogram_perf/histogram_perf.o 00:02:12.707 CC test/env/pci/pci_ut.o 00:02:12.707 CC test/event/reactor/reactor.o 00:02:12.707 CC test/env/vtophys/vtophys.o 00:02:12.707 CC test/app/jsoncat/jsoncat.o 00:02:12.707 CC test/thread/lock/spdk_lock.o 00:02:12.707 CC test/event/event_perf/event_perf.o 00:02:12.707 CC test/thread/poller_perf/poller_perf.o 00:02:12.707 CC test/env/memory/memory_ut.o 00:02:12.707 CC test/nvme/aer/aer.o 00:02:12.707 CC test/nvme/e2edp/nvme_dp.o 00:02:12.707 CC test/nvme/reserve/reserve.o 00:02:12.707 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:12.707 CC test/nvme/err_injection/err_injection.o 00:02:12.707 CC test/nvme/overhead/overhead.o 00:02:12.707 CC test/nvme/simple_copy/simple_copy.o 00:02:12.707 CC test/nvme/sgl/sgl.o 00:02:12.707 CC test/nvme/fused_ordering/fused_ordering.o 00:02:12.707 CC test/nvme/fdp/fdp.o 00:02:12.707 CC test/event/reactor_perf/reactor_perf.o 00:02:12.707 CC test/nvme/cuse/cuse.o 00:02:12.707 CC examples/blob/cli/blobcli.o 00:02:12.707 CC app/fio/nvme/fio_plugin.o 00:02:12.707 CC test/nvme/reset/reset.o 00:02:12.707 CC test/nvme/boot_partition/boot_partition.o 00:02:12.707 CC test/nvme/startup/startup.o 00:02:12.707 CC test/nvme/connect_stress/connect_stress.o 00:02:12.707 CC test/bdev/bdevio/bdevio.o 00:02:12.707 CC test/nvme/compliance/nvme_compliance.o 00:02:12.707 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:12.707 CC test/event/app_repeat/app_repeat.o 00:02:12.707 CC examples/thread/thread/thread_ex.o 00:02:12.707 CC examples/bdev/hello_world/hello_bdev.o 00:02:12.707 CC examples/blob/hello_world/hello_blob.o 00:02:12.707 CC examples/nvmf/nvmf/nvmf.o 00:02:12.707 CC examples/bdev/bdevperf/bdevperf.o 00:02:12.707 CC test/blobfs/mkfs/mkfs.o 00:02:12.707 CC test/accel/dif/dif.o 00:02:12.707 LINK spdk_lspci 00:02:12.707 CC test/app/bdev_svc/bdev_svc.o 00:02:12.707 CC test/event/scheduler/scheduler.o 00:02:12.707 CC app/fio/bdev/fio_plugin.o 00:02:12.707 CC test/dma/test_dma/test_dma.o 00:02:12.707 CC test/env/mem_callbacks/mem_callbacks.o 00:02:12.707 LINK rpc_client_test 00:02:12.707 CC test/lvol/esnap/esnap.o 00:02:12.707 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:12.707 LINK spdk_nvme_discover 00:02:12.707 LINK interrupt_tgt 00:02:12.707 LINK lsvmd 00:02:12.707 CXX test/cpp_headers/blobfs.o 00:02:12.707 LINK led 00:02:12.707 LINK vtophys 00:02:12.707 LINK spdk_trace_record 00:02:12.707 LINK zipf 00:02:12.707 CXX test/cpp_headers/blob.o 00:02:12.707 LINK nvmf_tgt 00:02:12.707 LINK histogram_perf 00:02:12.973 CXX test/cpp_headers/conf.o 00:02:12.973 LINK reactor 00:02:12.973 LINK jsoncat 00:02:12.973 LINK event_perf 00:02:12.973 LINK poller_perf 00:02:12.973 CXX test/cpp_headers/config.o 00:02:12.973 LINK vhost 00:02:12.973 CXX test/cpp_headers/cpuset.o 00:02:12.973 CXX test/cpp_headers/crc16.o 00:02:12.973 CXX test/cpp_headers/crc32.o 00:02:12.973 LINK reactor_perf 00:02:12.973 LINK env_dpdk_post_init 00:02:12.973 LINK pmr_persistence 00:02:12.973 CXX test/cpp_headers/crc64.o 00:02:12.973 LINK iscsi_tgt 00:02:12.973 LINK stub 00:02:12.973 LINK app_repeat 00:02:12.973 LINK verify 00:02:12.973 LINK boot_partition 00:02:12.973 LINK cmb_copy 00:02:12.973 LINK connect_stress 
00:02:12.973 CXX test/cpp_headers/dif.o
00:02:12.973 LINK hello_world
00:02:12.973 LINK err_injection
00:02:12.973 LINK ioat_perf
00:02:12.973 LINK startup
00:02:12.973 LINK fused_ordering
00:02:12.973 LINK reserve
00:02:12.973 LINK hotplug
00:02:12.973 LINK doorbell_aers
00:02:12.973 LINK spdk_tgt
00:02:12.973 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:12.973 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end]
00:02:12.973 struct spdk_nvme_fdp_ruhs ruhs;
00:02:12.973 ^
00:02:12.973 LINK hello_sock
00:02:12.973 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:12.973 LINK mkfs
00:02:12.973 LINK bdev_svc
00:02:12.973 LINK simple_copy
00:02:12.973 LINK nvme_dp
00:02:12.973 LINK hello_blob
00:02:12.973 LINK reset
00:02:12.973 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o
00:02:12.973 LINK aer
00:02:12.973 LINK thread
00:02:12.973 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o
00:02:12.973 LINK hello_bdev
00:02:12.973 LINK sgl
00:02:12.973 LINK overhead
00:02:12.973 LINK scheduler
00:02:12.974 LINK fdp
00:02:12.974 CXX test/cpp_headers/dma.o
00:02:12.974 LINK spdk_trace
00:02:12.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:12.974 LINK idxd_perf
00:02:12.974 LINK abort
00:02:12.974 CXX test/cpp_headers/env_dpdk.o
00:02:12.974 CXX test/cpp_headers/endian.o
00:02:12.974 LINK reconnect
00:02:12.974 LINK arbitration
00:02:12.974 CXX test/cpp_headers/env.o
00:02:13.237 CXX test/cpp_headers/event.o
00:02:13.237 CXX test/cpp_headers/fd_group.o
00:02:13.237 LINK nvmf
00:02:13.237 CXX test/cpp_headers/fd.o
00:02:13.237 CXX test/cpp_headers/file.o
00:02:13.237 LINK bdevio
00:02:13.237 LINK nvme_manage
00:02:13.237 LINK test_dma
00:02:13.237 LINK spdk_dd
00:02:13.237 LINK pci_ut
00:02:13.237 LINK nvme_compliance
00:02:13.237 LINK accel_perf
00:02:13.237 LINK blobcli
00:02:13.237 CXX test/cpp_headers/ftl.o
00:02:13.237 LINK nvme_fuzz
00:02:13.237 LINK dif
00:02:13.237 CXX test/cpp_headers/gpt_spec.o
00:02:13.237 CXX test/cpp_headers/hexlify.o
00:02:13.237 CXX test/cpp_headers/histogram_data.o
00:02:13.237 CXX test/cpp_headers/idxd.o
00:02:13.237 CXX test/cpp_headers/idxd_spec.o
00:02:13.237 CXX test/cpp_headers/init.o
00:02:13.237 CXX test/cpp_headers/ioat.o
00:02:13.237 CXX test/cpp_headers/ioat_spec.o
00:02:13.237 1 warning generated.
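[editor's note] The single diagnostic above is clang's -Wgnu-variable-sized-type-not-at-end check: ISO C forbids a struct whose last member is a flexible array from being embedded in another struct at all, and clang accepts the embedding only as a GNU extension, warning when further fields follow the embedded struct because data written through the flexible array would overlap them. A minimal sketch that reproduces the same message follows; the type and field names are hypothetical stand-ins, not the actual layout in SPDK's fio_plugin.c.

/* demo.c - reproduces -Wgnu-variable-sized-type-not-at-end (hypothetical names) */
struct ruhs_hdr {
        unsigned short nruhsd;     /* number of descriptors */
        unsigned int   desc[];     /* flexible array member: must end the struct */
};

struct ruhs_buf {
        struct ruhs_hdr hdr;       /* variable-sized type, but another field follows... */
        unsigned int storage[128]; /* ...so clang flags 'hdr' as a GNU extension */
};

int main(void) { return 0; }

Compiling with "clang -Wgnu-variable-sized-type-not-at-end -c demo.c" (the flag named in the warning text) emits the same "field ... with variable sized type ... is a GNU extension" message; nothing promotes it to an error in this build, so the step finishes with "1 warning generated." as logged.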
00:02:13.558 LINK spdk_bdev 00:02:13.558 CXX test/cpp_headers/iscsi_spec.o 00:02:13.558 CXX test/cpp_headers/json.o 00:02:13.558 LINK mem_callbacks 00:02:13.558 LINK llvm_vfio_fuzz 00:02:13.558 CXX test/cpp_headers/jsonrpc.o 00:02:13.558 LINK spdk_nvme 00:02:13.558 CXX test/cpp_headers/keyring.o 00:02:13.558 CXX test/cpp_headers/keyring_module.o 00:02:13.558 CXX test/cpp_headers/likely.o 00:02:13.558 LINK spdk_nvme_identify 00:02:13.558 CXX test/cpp_headers/log.o 00:02:13.558 LINK spdk_nvme_perf 00:02:13.558 CXX test/cpp_headers/lvol.o 00:02:13.558 CXX test/cpp_headers/memory.o 00:02:13.558 CXX test/cpp_headers/mmio.o 00:02:13.558 CXX test/cpp_headers/nbd.o 00:02:13.558 LINK vhost_fuzz 00:02:13.558 CXX test/cpp_headers/notify.o 00:02:13.558 CXX test/cpp_headers/nvme.o 00:02:13.558 CXX test/cpp_headers/nvme_intel.o 00:02:13.558 CXX test/cpp_headers/nvme_ocssd.o 00:02:13.558 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:13.558 CXX test/cpp_headers/nvme_spec.o 00:02:13.558 CXX test/cpp_headers/nvme_zns.o 00:02:13.558 LINK bdevperf 00:02:13.558 CXX test/cpp_headers/nvmf_cmd.o 00:02:13.558 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:13.558 LINK spdk_top 00:02:13.558 CXX test/cpp_headers/nvmf.o 00:02:13.558 CXX test/cpp_headers/nvmf_spec.o 00:02:13.558 CXX test/cpp_headers/nvmf_transport.o 00:02:13.853 CXX test/cpp_headers/opal.o 00:02:13.853 CXX test/cpp_headers/opal_spec.o 00:02:13.853 CXX test/cpp_headers/pci_ids.o 00:02:13.853 CXX test/cpp_headers/pipe.o 00:02:13.853 CXX test/cpp_headers/queue.o 00:02:13.853 CXX test/cpp_headers/reduce.o 00:02:13.853 CXX test/cpp_headers/rpc.o 00:02:13.853 CXX test/cpp_headers/scheduler.o 00:02:13.853 CXX test/cpp_headers/scsi.o 00:02:13.853 CXX test/cpp_headers/scsi_spec.o 00:02:13.853 CXX test/cpp_headers/sock.o 00:02:13.853 CXX test/cpp_headers/stdinc.o 00:02:13.853 CXX test/cpp_headers/string.o 00:02:13.853 CXX test/cpp_headers/thread.o 00:02:13.853 CXX test/cpp_headers/trace.o 00:02:13.853 CXX test/cpp_headers/trace_parser.o 00:02:13.853 CXX test/cpp_headers/tree.o 00:02:13.853 CXX test/cpp_headers/ublk.o 00:02:13.853 CXX test/cpp_headers/util.o 00:02:13.853 CXX test/cpp_headers/uuid.o 00:02:13.853 CXX test/cpp_headers/version.o 00:02:13.853 CXX test/cpp_headers/vfio_user_pci.o 00:02:13.853 CXX test/cpp_headers/vfio_user_spec.o 00:02:13.853 CXX test/cpp_headers/vhost.o 00:02:13.853 CXX test/cpp_headers/vmd.o 00:02:13.853 CXX test/cpp_headers/xor.o 00:02:13.853 LINK llvm_nvme_fuzz 00:02:13.853 CXX test/cpp_headers/zipf.o 00:02:14.112 LINK memory_ut 00:02:14.112 LINK cuse 00:02:14.370 LINK spdk_lock 00:02:14.629 LINK iscsi_fuzz 00:02:17.179 LINK esnap 00:02:17.179 00:02:17.179 real 0m37.476s 00:02:17.179 user 6m30.207s 00:02:17.179 sys 2m15.829s 00:02:17.179 20:01:04 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:17.179 20:01:04 make -- common/autotest_common.sh@10 -- $ set +x 00:02:17.179 ************************************ 00:02:17.179 END TEST make 00:02:17.179 ************************************ 00:02:17.179 20:01:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:17.179 20:01:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:17.179 20:01:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:17.179 20:01:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.179 20:01:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:17.179 20:01:04 -- pm/common@44 -- $ pid=1540054 00:02:17.179 20:01:04 -- pm/common@50 -- $ 
kill -TERM 1540054 00:02:17.179 20:01:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.179 20:01:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:17.179 20:01:04 -- pm/common@44 -- $ pid=1540056 00:02:17.179 20:01:04 -- pm/common@50 -- $ kill -TERM 1540056 00:02:17.179 20:01:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.179 20:01:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:17.179 20:01:04 -- pm/common@44 -- $ pid=1540057 00:02:17.179 20:01:04 -- pm/common@50 -- $ kill -TERM 1540057 00:02:17.179 20:01:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.179 20:01:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:17.179 20:01:04 -- pm/common@44 -- $ pid=1540089 00:02:17.179 20:01:04 -- pm/common@50 -- $ sudo -E kill -TERM 1540089 00:02:17.179 20:01:04 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:17.179 20:01:04 -- nvmf/common.sh@7 -- # uname -s 00:02:17.179 20:01:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:17.179 20:01:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:17.179 20:01:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:17.179 20:01:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:17.179 20:01:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:17.179 20:01:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:17.179 20:01:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:17.179 20:01:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:17.179 20:01:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:17.179 20:01:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:17.440 20:01:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8089bee2-271d-eb11-906e-0017a4403562 00:02:17.440 20:01:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8089bee2-271d-eb11-906e-0017a4403562 00:02:17.440 20:01:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:17.440 20:01:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:17.440 20:01:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:17.440 20:01:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:17.440 20:01:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:17.440 20:01:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:17.440 20:01:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.440 20:01:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.440 20:01:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.440 20:01:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.440 20:01:04 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.440 20:01:04 -- paths/export.sh@5 -- # export PATH 00:02:17.440 20:01:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.440 20:01:04 -- nvmf/common.sh@47 -- # : 0 00:02:17.440 20:01:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:17.440 20:01:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:17.440 20:01:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:17.440 20:01:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:17.440 20:01:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:17.440 20:01:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:17.440 20:01:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:17.440 20:01:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:17.440 20:01:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:17.440 20:01:04 -- spdk/autotest.sh@32 -- # uname -s 00:02:17.440 20:01:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:17.440 20:01:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:17.440 20:01:04 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:17.440 20:01:04 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:17.440 20:01:04 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:17.440 20:01:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:17.440 20:01:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:17.440 20:01:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:17.440 20:01:04 -- spdk/autotest.sh@48 -- # udevadm_pid=1599428 00:02:17.440 20:01:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:17.440 20:01:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:17.440 20:01:04 -- pm/common@17 -- # local monitor 00:02:17.440 20:01:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.440 20:01:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.440 20:01:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.441 20:01:04 -- pm/common@21 -- # date +%s 00:02:17.441 20:01:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.441 20:01:04 -- pm/common@21 -- # date +%s 00:02:17.441 20:01:04 -- pm/common@25 -- # sleep 1 00:02:17.441 20:01:04 -- pm/common@21 -- # date +%s 00:02:17.441 20:01:04 -- pm/common@21 -- # date +%s 00:02:17.441 20:01:04 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715882464 00:02:17.441 20:01:04 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1715882464 00:02:17.441 20:01:04 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715882464 00:02:17.441 20:01:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715882464 00:02:17.441 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715882464_collect-vmstat.pm.log 00:02:17.441 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715882464_collect-cpu-load.pm.log 00:02:17.441 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715882464_collect-cpu-temp.pm.log 00:02:17.441 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715882464_collect-bmc-pm.bmc.pm.log 00:02:18.378 20:01:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:18.378 20:01:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:18.378 20:01:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:18.378 20:01:05 -- common/autotest_common.sh@10 -- # set +x 00:02:18.378 20:01:05 -- spdk/autotest.sh@59 -- # create_test_list 00:02:18.378 20:01:05 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:18.378 20:01:05 -- common/autotest_common.sh@10 -- # set +x 00:02:18.378 20:01:05 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:18.378 20:01:05 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:18.378 20:01:05 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:18.378 20:01:05 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:18.378 20:01:05 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:18.378 20:01:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:18.378 20:01:05 -- common/autotest_common.sh@1451 -- # uname 00:02:18.378 20:01:05 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:18.378 20:01:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:18.378 20:01:05 -- common/autotest_common.sh@1471 -- # uname 00:02:18.378 20:01:05 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:18.378 20:01:05 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:18.378 20:01:05 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:18.378 20:01:05 -- spdk/autotest.sh@72 -- # hash lcov 00:02:18.378 20:01:05 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:18.378 20:01:05 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:18.378 20:01:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:18.378 20:01:05 -- common/autotest_common.sh@10 -- # set +x 00:02:18.378 20:01:05 -- spdk/autotest.sh@91 -- # rm -f 00:02:18.378 20:01:05 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:21.675 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:02:21.675 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:21.675 0000:00:04.7 (8086 2021): Already using the ioatdma 
driver 00:02:21.675 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:21.675 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:21.675 20:01:08 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:21.675 20:01:08 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:21.675 20:01:08 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:21.675 20:01:08 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:21.675 20:01:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:21.675 20:01:08 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:21.675 20:01:08 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:21.675 20:01:08 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:21.675 20:01:08 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:21.675 20:01:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:21.675 20:01:08 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:02:21.675 20:01:08 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:02:21.675 20:01:08 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:21.675 20:01:08 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:21.675 20:01:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:21.675 20:01:08 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:02:21.675 20:01:08 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:02:21.675 20:01:08 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:21.675 20:01:08 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:21.675 20:01:08 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:21.675 20:01:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:21.675 20:01:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:21.675 20:01:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:21.675 20:01:08 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:21.676 20:01:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:21.676 No valid GPT data, bailing 00:02:21.676 20:01:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:21.676 20:01:08 -- scripts/common.sh@391 -- # pt= 00:02:21.676 20:01:08 -- scripts/common.sh@392 -- # return 1 00:02:21.676 20:01:08 -- 
spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:21.676 1+0 records in 00:02:21.676 1+0 records out 00:02:21.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00216053 s, 485 MB/s 00:02:21.676 20:01:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:21.676 20:01:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:21.676 20:01:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:21.676 20:01:08 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:21.676 20:01:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:21.676 No valid GPT data, bailing 00:02:21.676 20:01:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:21.676 20:01:08 -- scripts/common.sh@391 -- # pt= 00:02:21.676 20:01:08 -- scripts/common.sh@392 -- # return 1 00:02:21.676 20:01:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:21.676 1+0 records in 00:02:21.676 1+0 records out 00:02:21.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00137382 s, 763 MB/s 00:02:21.676 20:01:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:21.676 20:01:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:21.676 20:01:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:02:21.676 20:01:08 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:02:21.676 20:01:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:02:21.676 No valid GPT data, bailing 00:02:21.676 20:01:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:02:21.676 20:01:08 -- scripts/common.sh@391 -- # pt= 00:02:21.676 20:01:08 -- scripts/common.sh@392 -- # return 1 00:02:21.676 20:01:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:02:21.676 1+0 records in 00:02:21.676 1+0 records out 00:02:21.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517748 s, 203 MB/s 00:02:21.676 20:01:08 -- spdk/autotest.sh@118 -- # sync 00:02:21.676 20:01:08 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:21.676 20:01:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:21.676 20:01:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:26.950 20:01:13 -- spdk/autotest.sh@124 -- # uname -s 00:02:26.950 20:01:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:26.950 20:01:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:26.950 20:01:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:26.950 20:01:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:26.950 20:01:13 -- common/autotest_common.sh@10 -- # set +x 00:02:26.950 ************************************ 00:02:26.950 START TEST setup.sh 00:02:26.950 ************************************ 00:02:26.950 20:01:13 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:26.950 * Looking for test storage... 
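The wipe step above boils down to: probe each whole NVMe namespace for a partition table and zero its first MiB when none is found. A minimal bash sketch of that pattern (the device glob and control flow are illustrative; the real script also consults spdk-gpt.py before deciding):

  for dev in /dev/nvme*n1; do
      # blkid prints the partition-table type (e.g. "gpt") if one exists;
      # an empty result means the namespace carries no table and is safe to scrub
      pt=$(blkid -s PTTYPE -o value "$dev")
      if [[ -z $pt ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB
      fi
  done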
00:02:26.950 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:26.950 20:01:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:26.950 20:01:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:26.950 20:01:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:26.950 20:01:13 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:26.950 20:01:13 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:26.950 20:01:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:26.950 ************************************ 00:02:26.950 START TEST acl 00:02:26.950 ************************************ 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:26.950 * Looking for test storage... 00:02:26.950 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:26.950 20:01:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:26.950 20:01:13 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:26.950 20:01:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:26.950 20:01:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:26.950 20:01:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:26.950 20:01:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:26.950 20:01:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:26.950 20:01:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.950 20:01:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.242 20:01:16 setup.sh.acl -- setup/acl.sh@52 -- # 
collect_setup_devs 00:02:30.242 20:01:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:30.242 20:01:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.242 20:01:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:30.242 20:01:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.242 20:01:16 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:32.780 Hugepages 00:02:32.780 node hugesize free / total 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 00:02:32.780 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
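collect_setup_devs above is consuming the `setup.sh status` table one row at a time with that `read -r _ dev _ _ _ driver _` call: field 2 is the BDF, field 6 the bound driver. Roughly, under the column layout shown in the status header ($spdk_dir is a placeholder for the SPDK checkout path):

  devs=()
  declare -A drivers
  while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue    # skip hugepage/header rows
      [[ $driver == nvme ]] || continue    # keep only NVMe controllers
      devs+=("$dev")                       # remember the BDF...
      drivers["$dev"]=$driver              # ...and its current driver
  done < <("$spdk_dir/scripts/setup.sh" status)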
00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.780 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.781 
20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@24 -- # (( 3 > 0 )) 00:02:32.781 20:01:19 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:32.781 20:01:19 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:32.781 20:01:19 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:32.781 20:01:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:32.781 ************************************ 00:02:32.781 START TEST denied 00:02:32.781 ************************************ 00:02:32.781 20:01:19 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:32.781 20:01:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:32.781 20:01:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:32.781 20:01:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.781 20:01:19 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:32.781 20:01:19 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:36.972 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.972 20:01:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.167 00:02:41.167 real 0m7.893s 00:02:41.167 user 0m2.158s 00:02:41.167 sys 0m3.907s 
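The denied test above exercises PCI_BLOCKED: setup.sh must skip the blocked controller, and the verify step then asserts that its kernel driver was left untouched. Condensed to its core (paths shortened; $spdk_dir again a placeholder):

  # setup.sh config must announce that it skipped the blocked controller
  PCI_BLOCKED=' 0000:5e:00.0' "$spdk_dir/scripts/setup.sh" config \
      | grep 'Skipping denied controller at 0000:5e:00.0'
  # the blocked device must still resolve to the in-kernel nvme driver
  driver=$(readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver)
  [[ ${driver##*/} == nvme ]]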
00:02:41.167 20:01:27 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:41.167 20:01:27 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:41.167 ************************************ 00:02:41.167 END TEST denied 00:02:41.167 ************************************ 00:02:41.167 20:01:27 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:41.167 20:01:27 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:41.167 20:01:27 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:41.167 20:01:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:41.167 ************************************ 00:02:41.167 START TEST allowed 00:02:41.167 ************************************ 00:02:41.167 20:01:27 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:41.167 20:01:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:41.167 20:01:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:41.167 20:01:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.167 20:01:27 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:41.167 20:01:27 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:45.367 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:5f:00.0 0000:d8:00.0 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.367 20:01:31 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.904 00:02:47.904 real 0m6.768s 00:02:47.904 user 0m2.086s 00:02:47.904 sys 0m3.773s 00:02:47.904 20:01:34 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:47.904 20:01:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:47.904 ************************************ 00:02:47.904 END TEST allowed 00:02:47.904 ************************************ 00:02:47.904 00:02:47.904 real 0m20.744s 00:02:47.904 user 0m6.528s 00:02:47.904 sys 0m11.596s 00:02:47.904 20:01:34 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 
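The allowed test is the mirror image: with PCI_ALLOWED naming a single controller, only that one may be rebound (nvme -> vfio-pci), while every other NVMe controller must stay on the kernel driver. Schematically, with the same placeholder conventions:

  PCI_ALLOWED=0000:5e:00.0 "$spdk_dir/scripts/setup.sh" config \
      | grep -E '0000:5e:00.0 .*: nvme -> .*'
  for bdf in 0000:5f:00.0 0000:d8:00.0; do      # controllers left alone
      driver=$(readlink -f /sys/bus/pci/devices/$bdf/driver)
      [[ ${driver##*/} == nvme ]]               # still on the nvme driver
  done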
00:02:47.904 20:01:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:47.904 ************************************ 00:02:47.904 END TEST acl 00:02:47.904 ************************************ 00:02:47.904 20:01:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.904 20:01:34 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:47.904 20:01:34 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:47.904 20:01:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:47.904 ************************************ 00:02:47.904 START TEST hugepages 00:02:47.904 ************************************ 00:02:47.904 20:01:34 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.904 * Looking for test storage... 00:02:47.904 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.904 20:01:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 79037596 kB' 'MemAvailable: 79751164 kB' 'Buffers: 1308 kB' 'Cached: 9637712 kB' 'SwapCached: 0 kB' 'Active: 9786892 kB' 'Inactive: 476248 kB' 'Active(anon): 9173868 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627500 kB' 'Mapped: 196112 kB' 'Shmem: 8549748 kB' 'KReclaimable: 481136 kB' 'Slab: 1105540 kB' 'SReclaimable: 481136 kB' 'SUnreclaim: 624404 kB' 'KernelStack: 19968 kB' 'PageTables: 10080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 55018388 kB' 'Committed_AS: 10643408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212568 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[... get_meminfo scans /proc/meminfo field by field (MemTotal, MemFree, MemAvailable, ..., HugePages_Surp); every non-matching field logs the same four xtrace steps: [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]], continue, IFS=': ', read -r var val _ ...]
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
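The two helpers traced above reduce to a field lookup and a sysfs reset. A compact sketch of both (plain globs stand in for the script's extglob pattern):

  get_meminfo() {                      # e.g. get_meminfo Hugepagesize -> 2048
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && echo "$val" && return 0
      done </proc/meminfo
      return 1
  }
  clear_hp() {                         # zero every hugepage pool on every node
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 >"$hp/nr_hugepages"
          done
      done
  }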
00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:47.906 20:01:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:47.906 20:01:34 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:47.906 20:01:34 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:47.906 20:01:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:47.906 ************************************ 00:02:47.906 START TEST default_setup 00:02:47.906 ************************************ 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.906 20:01:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:50.440 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.440 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.440 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.440 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.440 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.440 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:50.698 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.698 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:50.698 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.699 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.699 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.699 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.699 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.699 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:50.699 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.699 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:51.644 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:51.644 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:02:51.644 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.915 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81441208 kB' 'MemAvailable: 82153112 kB' 'Buffers: 1308 kB' 'Cached: 9637816 kB' 'SwapCached: 0 kB' 'Active: 9802844 kB' 'Inactive: 476248 kB' 'Active(anon): 9189820 kB' 'Inactive(anon): 
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.914 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.915 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81441208 kB' 'MemAvailable: 82153112 kB' 'Buffers: 1308 kB' 'Cached: 9637816 kB' 'SwapCached: 0 kB' 'Active: 9802844 kB' 'Inactive: 476248 kB' 'Active(anon): 9189820 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643744 kB' 'Mapped: 195956 kB' 'Shmem: 8549852 kB' 'KReclaimable: 479472 kB' 'Slab: 1098720 kB' 'SReclaimable: 479472 kB' 'SUnreclaim: 619248 kB' 'KernelStack: 19792 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10662956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212488 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:02:51.915 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:51.915 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:51.915 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.915 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
...
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
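The AnonHugePages scan returned 0 (no transparent huge pages in use), so anon=0. Every get_meminfo call traced here follows the same pattern: snapshot /proc/meminfo (or a per-NUMA-node copy when a node argument is given), then scan key by key until the requested field matches and echo its value -- which is why the log shows one [[ ... ]] / continue pair per meminfo key. A condensed stand-alone equivalent, with a hypothetical helper name and behavior inferred from the trace:

# get_meminfo_value KEY [NODE]: sketch of the scan the trace shows.
get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    # With a node argument, read the per-node file instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        # Per-node lines carry a "Node <N>" prefix, which the real script
        # strips (the mem=("${mem[@]#Node +([0-9]) }") step above); this
        # sketch omits that and only handles plain /proc/meminfo.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
get_meminfo_value AnonHugePages   # prints 0 on the box traced above

The same scan now repeats for HugePages_Surp and HugePages_Rsvd below.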
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81441284 kB' 'MemAvailable: 82153188 kB' 'Buffers: 1308 kB' 'Cached: 9637816 kB' 'SwapCached: 0 kB' 'Active: 9803960 kB' 'Inactive: 476248 kB' 'Active(anon): 9190936 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644356 kB' 'Mapped: 196032 kB' 'Shmem: 8549852 kB' 'KReclaimable: 479472 kB' 'Slab: 1098696 kB' 'SReclaimable: 479472 kB' 'SUnreclaim: 619224 kB' 'KernelStack: 20016 kB' 'PageTables: 9424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10662972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212392 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.916 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
...
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
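HugePages_Surp (surplus pages allocated beyond nr_hugepages under overcommit) came back 0, and the script next reads HugePages_Rsvd (pages reserved for a mapping but not yet faulted in) the same way. The same counters are also exposed per page size under sysfs, which avoids parsing /proc/meminfo at all -- a sketch, assuming the standard 2048 kB pool seen in the snapshots above:

# Sketch: the hugepage counters for the 2 MiB pool, straight from sysfs.
hp=/sys/kernel/mm/hugepages/hugepages-2048kB
for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    printf '%s=%s\n' "$f" "$(cat "$hp/$f")"
done
# Expected on this box, matching the meminfo snapshots in the log:
# nr_hugepages=1024 free_hugepages=1024 resv_hugepages=0 surplus_hugepages=0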
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81441088 kB' 'MemAvailable: 82152992 kB' 'Buffers: 1308 kB' 'Cached: 9637820 kB' 'SwapCached: 0 kB' 'Active: 9802856 kB' 'Inactive: 476248 kB' 'Active(anon): 9189832 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643212 kB' 'Mapped: 196032 kB' 'Shmem: 8549856 kB' 'KReclaimable: 479472 kB' 'Slab: 1098696 kB' 'SReclaimable: 479472 kB' 'SUnreclaim: 619224 kB' 'KernelStack: 19696 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10661436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212312 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.918 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
...
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.920 nr_hugepages=1024 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.920 resv_hugepages=0 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.920 surplus_hugepages=0 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.920 anon_hugepages=0 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t 
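For reference, the xtrace above is the get_meminfo helper in setup/common.sh resolving HugePages_Rsvd to 0. A condensed sketch of the logic, reconstructed from the trace rather than quoted from the source (variable names match the trace; the exact function body may differ):

    # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo,
    # or from the per-node meminfo file when NODE is given (needs extglob).
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip "Node N " prefixes
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the 'continue' lines seen in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Every 'continue' entry in the trace is one non-matching meminfo key falling through this loop.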
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.920 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81436708 kB' 'MemAvailable: 82148612 kB' 'Buffers: 1308 kB' 'Cached: 9637856 kB' 'SwapCached: 0 kB' 'Active: 9807288 kB' 'Inactive: 476248 kB' 'Active(anon): 9194264 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647720 kB' 'Mapped: 196532 kB' 'Shmem: 8549892 kB' 'KReclaimable: 479472 kB' 'Slab: 1098696 kB' 'SReclaimable: 479472 kB' 'SUnreclaim: 619224 kB' 'KernelStack: 19760 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10665228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212280 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[trace condensed: setup/common.sh@31-32 tests each key from MemTotal through Unaccepted against HugePages_Total and skips it with 'continue'; 00:02:51.920-00:02:51.922, repetitive iterations elided]
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
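get_nodes (setup/hugepages.sh@27-33, above) enumerates the NUMA nodes and records each node's current hugepage count: 1024 on node0 and 0 on node1 on this machine. A minimal sketch of that enumeration; the glob matches the trace, but the source of the per-node count (visible only as the expanded values 1024 and 0) is assumed here to be the node's sysfs nr_hugepages file:

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path down to the numeric node id
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 in this run
    (( no_nodes > 0 ))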
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.922 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118528 kB' 'MemFree: 36712600 kB' 'MemUsed: 11405928 kB' 'SwapCached: 0 kB' 'Active: 7113828 kB' 'Inactive: 152112 kB' 'Active(anon): 6613956 kB' 'Inactive(anon): 0 kB' 'Active(file): 499872 kB' 'Inactive(file): 152112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7081580 kB' 'Mapped: 158132 kB' 'AnonPages: 187496 kB' 'Shmem: 6429596 kB' 'KernelStack: 11624 kB' 'PageTables: 4940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322180 kB' 'Slab: 713644 kB' 'SReclaimable: 322180 kB' 'SUnreclaim: 391464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-32 tests each node0 key from MemTotal through HugePages_Free against HugePages_Surp and skips it with 'continue'; 00:02:51.922-00:02:51.923, repetitive iterations elided]
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:51.923
00:02:51.923 real 0m4.150s
00:02:51.923 user 0m1.323s
00:02:51.923 sys 0m1.945s
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:51.923 20:01:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:51.923 ************************************
00:02:51.923 END TEST default_setup
00:02:51.923 ************************************
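default_setup thus passed, with node0=1024 matching the expected 1024 in 4.15 s. For anyone replaying this on a test rig, the state the test just verified can be spot-checked by hand; this assumes the 2048 kB default hugepage size used throughout this run:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # expect 1024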
00:02:51.923 20:01:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:51.923 20:01:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:51.923 20:01:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:51.923 20:01:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:51.923 ************************************
00:02:51.923 START TEST per_node_1G_alloc
00:02:51.923 ************************************
00:02:51.923 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:02:51.923 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:51.923 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:51.923 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
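get_test_nr_hugepages turns the requested size into a per-node page count: 1048576 kB (1 GiB) at the 2048 kB default hugepage size is 512 pages, and every node listed gets the full 512 (the count is not split across nodes). The comma in HUGENODE=0,1 comes from the local IFS=, set at setup/hugepages.sh@143. A sketch of the arithmetic as it appears in the trace; the division step and the default_hugepages value in kB are assumptions, since only the results are visible:

    size=1048576                                   # kB requested per node (1 GiB)
    default_hugepages=2048                         # kB, assumed default hugepage size
    nr_hugepages=$(( size / default_hugepages ))   # 512
    user_nodes=(0 1)
    nodes_test=()
    for n in "${user_nodes[@]}"; do
        nodes_test[n]=$nr_hugepages                # each requested node gets all 512
    done
    IFS=,                                          # mirrors the local IFS=, at @143
    echo "NRHUGE=$nr_hugepages HUGENODE=${user_nodes[*]}"   # NRHUGE=512 HUGENODE=0,1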
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:51.924 20:01:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:02:54.460 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:54.460 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:54.460 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:54.460 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
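The @96 test above is a transparent-hugepage gate: the left-hand side, always [madvise] never, is the content of what is almost certainly /sys/kernel/mm/transparent_hugepage/enabled (madvise mode selected), and the branch is taken because THP is not pinned to never. Since THP can hand out anonymous huge pages on its own, the helper then samples AnonHugePages. A hedged sketch:

    # Gate seen at setup/hugepages.sh@96 (file path assumed, not shown in the trace)
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)   # via the helper sketched earlier; 0 kB here
    fi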
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.723 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81486080 kB' 'MemAvailable: 82197856 kB' 'Buffers: 1308 kB' 'Cached: 9637956 kB' 'SwapCached: 0 kB' 'Active: 9804508 kB' 'Inactive: 476248 kB' 'Active(anon): 9191484 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644684 kB' 'Mapped: 196080 kB' 'Shmem: 8549992 kB' 'KReclaimable: 479344 kB' 'Slab: 1098600 kB' 'SReclaimable: 479344 kB' 'SUnreclaim: 619256 kB' 'KernelStack: 19776 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10660908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212424 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[trace condensed: setup/common.sh@31-32 tests each key from MemTotal through Shmem against AnonHugePages and skips it with 'continue'; 00:02:54.723-00:02:54.724, repetitive iterations elided]
00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:54.724 20:01:41
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81486244 kB' 'MemAvailable: 82198020 kB' 'Buffers: 1308 kB' 'Cached: 9637956 kB' 'SwapCached: 0 kB' 'Active: 9804696 kB' 'Inactive: 476248 kB' 'Active(anon): 9191672 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644440 kB' 'Mapped: 196120 kB' 'Shmem: 8549992 kB' 'KReclaimable: 479344 kB' 'Slab: 1098596 kB' 'SReclaimable: 479344 kB' 'SUnreclaim: 619252 kB' 'KernelStack: 19744 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10660924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212392 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.724 20:01:41 
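The trace above is setup/common.sh's get_meminfo walking /proc/meminfo line by line until the requested key matches, then printing its value column. A minimal standalone sketch of that lookup, assuming the same IFS-based splitting seen in the trace (the helper name meminfo_value is hypothetical, not the SPDK function):

  #!/usr/bin/env bash
  # Return the value column for one /proc/meminfo key, mirroring the
  # scan traced above: skip every line until $var matches the key.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # MemTotal, MemFree, ... all skipped
          echo "$val"                        # value in kB (a bare count for HugePages_*)
          return 0
      done < /proc/meminfo
      return 1                               # key not present
  }
  meminfo_value AnonHugePages                # prints 0 on this box, matching anon=0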
00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-31: same get_meminfo preamble as above, with get=HugePages_Surp]
00:02:54.724 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81486244 kB' 'MemAvailable: 82198020 kB' 'Buffers: 1308 kB' 'Cached: 9637956 kB' 'SwapCached: 0 kB' 'Active: 9804696 kB' 'Inactive: 476248 kB' 'Active(anon): 9191672 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644440 kB' 'Mapped: 196120 kB' 'Shmem: 8549992 kB' 'KReclaimable: 479344 kB' 'Slab: 1098596 kB' 'SReclaimable: 479344 kB' 'SUnreclaim: 619252 kB' 'KernelStack: 19744 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10660924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212392 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[setup/common.sh@31-32 trace condensed: every key from MemTotal through HugePages_Free fails [[ $var == HugePages_Surp ]] and hits continue]
00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
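HugePages_Surp and HugePages_Rsvd, which these scans pull out of /proc/meminfo, are also exposed per page size under sysfs; a quick cross-check under the standard kernel paths (the counter values are whatever the machine reports, shown here with this run's numbers):

  # Per-size hugepage counters; hugepages-2048kB matches the
  # 'Hugepagesize: 2048 kB' reported in every snapshot above.
  d=/sys/kernel/mm/hugepages/hugepages-2048kB
  for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
      printf '%-18s %s\n' "$f:" "$(cat "$d/$f")"
  done
  # For this run: nr_hugepages 1024, free_hugepages 1024,
  # resv_hugepages 0, surplus_hugepages 0 (cf. the meminfo snapshots)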
setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.725 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.726 
20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[the identical IFS/read/compare/continue trace repeats for each remaining meminfo field: KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free]
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:54.726 nr_hugepages=1024
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:54.726 resv_hugepages=0
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:54.726 surplus_hugepages=0
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:54.726 anon_hugepages=0
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81487264 kB' 'MemAvailable: 82198976 kB' 'Buffers: 1308 kB' 'Cached: 9638000 kB' 'SwapCached: 0 kB' 'Active: 9804280 kB' 'Inactive: 476248 kB' 'Active(anon): 9191256 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644440 kB' 'Mapped: 196044 kB' 'Shmem: 8550036 kB' 'KReclaimable: 479280 kB' 'Slab: 1098516 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619236 kB' 'KernelStack: 19760 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10660972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212392 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:54.726 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[the same read/compare/continue trace repeats for every field of the dump above, MemFree through Unaccepted]
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
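
Every get_meminfo call traced here follows the same pattern: dump the chosen meminfo file, then scan it with IFS=': ' read -r var val _ until the requested key matches; the backslash-escaped \H\u\g\e... form is simply how xtrace prints a literal [[ ]] comparison. A minimal sketch of that lookup, assuming a plain /proc/meminfo read (the helper name get_meminfo_value is illustrative, not the exact SPDK function):

    get_meminfo_value() {
        local get=$1 var val _
        # Split each "Key: value kB" line on colon and space.
        while IFS=': ' read -r var val _; do
            # Quoted expansion makes [[ ]] compare literally, the same effect
            # as the backslash-escaped pattern shown in the trace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Total    # prints 1024 on the box traced above
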
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118528 kB' 'MemFree: 37795332 kB' 'MemUsed: 10323196 kB' 'SwapCached: 0 kB' 'Active: 7113924 kB' 'Inactive: 152112 kB' 'Active(anon): 6614052 kB' 'Inactive(anon): 0 kB' 'Active(file): 499872 kB' 'Inactive(file): 152112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7081716 kB' 'Mapped: 158148 kB' 'AnonPages: 187440 kB' 'Shmem: 6429732 kB' 'KernelStack: 11624 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322180 kB' 'Slab: 713536 kB' 'SReclaimable: 322180 kB' 'SUnreclaim: 391356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
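
When a node argument is passed (here get_meminfo HugePages_Surp 0), the trace switches mem_f to /sys/devices/system/node/node0/meminfo; lines in that file carry a "Node 0 " prefix, which the extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips so the same key:value scan works unchanged. A sketch of that per-node handling, assuming a NUMA sysfs layout (variable names mirror the trace; the snippet itself is illustrative):

    shopt -s extglob                      # +([0-9]) below is an extglob pattern
    node=0
    mem_f=/proc/meminfo
    # Prefer the per-node file when the node directory exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node lines read "Node 0 HugePages_Surp: 0"; dropping the prefix lets
    # one scanner handle both the global and the per-node file.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
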
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.727 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[the same read/compare/continue trace repeats for every field of the node0 dump above, MemFree through HugePages_Free]
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 49335344 kB' 'MemFree: 43692436 kB' 'MemUsed: 5642908 kB' 'SwapCached: 0 kB' 'Active: 2690308 kB' 'Inactive: 324136 kB' 'Active(anon): 2577156 kB' 'Inactive(anon): 0 kB' 'Active(file): 113152 kB' 'Inactive(file): 324136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2557612 kB' 'Mapped: 37896 kB' 'AnonPages: 456932 kB' 'Shmem: 2120324 kB' 'KernelStack: 8120 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157100 kB' 'Slab: 384980 kB' 'SReclaimable: 157100 kB' 'SUnreclaim: 227880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.728 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[the same read/compare/continue trace repeats for every field of the node1 dump above, MemFree through HugePages_Free]
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:54.729 node0=512 expecting 512
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:54.729 node1=512 expecting 512
00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:54.729
00:02:54.729 real 0m2.849s
user
0m1.151s 00:02:54.729 sys 0m1.748s 00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:54.729 20:01:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:54.729 ************************************ 00:02:54.729 END TEST per_node_1G_alloc 00:02:54.729 ************************************ 00:02:54.988 20:01:41 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:54.988 20:01:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:54.988 20:01:41 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:54.988 20:01:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:54.988 ************************************ 00:02:54.988 START TEST even_2G_alloc 00:02:54.988 ************************************ 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- 
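The xtrace above is get_test_nr_hugepages converting the 2097152 kB (2 GiB) request into nr_hugepages=1024 default 2048 kB hugepages, and get_test_nr_hugepages_per_node splitting them evenly across the two NUMA nodes, 512 each. A minimal bash sketch of that arithmetic; variable names not present in the trace are illustrative:

# Sketch: the per-node split computed in the trace, assuming 2048 kB
# default hugepages and two NUMA nodes as on this system.
size_kb=2097152                                   # requested size (2 GiB)
hugepage_kb=2048                                  # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))         # -> 1024
no_nodes=2
declare -a nodes_test
for (( node = no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # -> 512 per node
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # node0=512 node1=512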
00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:54.988 20:01:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:02:57.598 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:57.598 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:57.598 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:57.598 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
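The get_meminfo call that follows walks the meminfo snapshot one "key: value" line at a time with IFS=': ' read, hitting "continue" on every field until the requested key matches, then echoes its value and returns. A minimal sketch of that lookup pattern, simplified from the traced setup/common.sh loop to read /proc/meminfo directly:

# Sketch: return the value of one /proc/meminfo field the way the traced
# loop does; every non-matching key shows up as a "continue" in the xtrace.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo AnonHugePages   # prints 0 on this run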
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.599 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81603084 kB' 'MemAvailable: 82314796 kB' 'Buffers: 1308 kB' 'Cached: 9638108 kB' 'SwapCached: 0 kB' 'Active: 9802324 kB' 'Inactive: 476248 kB' 'Active(anon): 9189300 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641956 kB' 'Mapped: 195068 kB' 'Shmem: 8550144 kB' 'KReclaimable: 479280 kB' 'Slab: 1098428 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619148 kB' 'KernelStack: 19728 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10649484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212360 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[xtrace elided: setup/common.sh@32 rejects each field ahead of AnonHugePages with "continue"]
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.601 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81603068 kB' 'MemAvailable: 82314780 kB' 'Buffers: 1308 kB' 'Cached: 9638112 kB' 'SwapCached: 0 kB' 'Active: 9801232 kB' 'Inactive: 476248 kB' 'Active(anon): 9188208 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641364 kB' 'Mapped: 194972 kB' 'Shmem: 8550148 kB' 'KReclaimable: 479280 kB' 'Slab: 1098408 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619128 kB' 'KernelStack: 19696 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10649132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212264 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[xtrace elided: setup/common.sh@32 rejects each field ahead of HugePages_Surp with "continue"]
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
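At this point verify_nr_hugepages has established anon=0 and surp=0 and is about to read HugePages_Rsvd. A sketch of the bookkeeping these reads feed, reusing the get_meminfo sketch above; the exact assertion is an assumption, but it is consistent with the snapshots in this run (1024 pages total, 1024 free, 0 reserved, 0 surplus):

# Sketch (assumed check, matching the snapshots above): the whole
# 1024-page pool should be present, free, unreserved, with no surplus.
expected=1024
total=$(get_meminfo HugePages_Total)   # 1024
free=$(get_meminfo HugePages_Free)     # 1024
resv=$(get_meminfo HugePages_Rsvd)     # 0
surp=$(get_meminfo HugePages_Surp)     # 0
if (( total == expected && free == total && resv == 0 && surp == 0 )); then
    echo "hugepage pool verified: $total pages"
else
    echo "unexpected hugepage accounting" >&2
fi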
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.604 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81607652 kB' 'MemAvailable: 82319364 kB' 'Buffers: 1308 kB' 'Cached: 9638124 kB' 'SwapCached: 0 kB' 'Active: 9801256 kB' 'Inactive: 476248 kB' 'Active(anon): 9188232 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640860 kB' 'Mapped: 194972 kB' 'Shmem: 8550160 kB' 'KReclaimable: 479280 kB' 'Slab: 1098408 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619128 kB' 'KernelStack: 19712 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10649160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212264 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the @31-32 loop walks every key of the snapshot above, 'continue' on each, until HugePages_Rsvd matches]
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:57.607 nr_hugepages=1024
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:57.607 resv_hugepages=0
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:57.607 surplus_hugepages=0
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:57.607 anon_hugepages=0
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
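[editor's note] At this point hugepages.sh has established surp=0 and resv=0 against a requested nr_hugepages=1024, and asserts that every allocated page is accounted for before checking the per-node split. A small sketch re-deriving that bookkeeping straight from /proc/meminfo (the awk extraction is an assumption; the assertion mirrors the hugepages.sh@107 check above):

```bash
# Re-derive the hugepages.sh@102-109 bookkeeping from /proc/meminfo.
nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

# Sanity check, as in the trace: requested pages == total + surplus + reserved.
# 1024 pages * 2048 kB = 2097152 kB, matching 'Hugetlb: 2097152 kB' in the
# snapshots above.
(( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
```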
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.607 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81610064 kB' 'MemAvailable: 82321776 kB' 'Buffers: 1308 kB' 'Cached: 9638156 kB' 'SwapCached: 0 kB' 'Active: 9801480 kB' 'Inactive: 476248 kB' 'Active(anon): 9188456 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641532 kB' 'Mapped: 194972 kB' 'Shmem: 8550192 kB' 'KReclaimable: 479280 kB' 'Slab: 1098408 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619128 kB' 'KernelStack: 19728 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10657004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212264 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the @31-32 loop walks every key of the snapshot above, 'continue' on each, until HugePages_Total matches]
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
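[editor's note] The get_nodes fragment above (hugepages.sh@27-33) discovers NUMA nodes by globbing sysfs; with two nodes, an even 2G split means 512 pages of 2048 kB apiece. A sketch of the same enumeration (extglob is required for the +([0-9]) pattern; the nodes_sys bookkeeping follows the trace, the error handling is an assumption):

```bash
# Enumerate NUMA nodes as the traced get_nodes does and record the
# expected per-node share of an even 2G allocation: 1024 pages / 2 = 512.
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
	nodes_sys[${node##*node}]=512
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "no_nodes=$no_nodes"  # 2 on this rig
```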
+([0-9]) }") 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118528 kB' 'MemFree: 37894520 kB' 'MemUsed: 10224008 kB' 'SwapCached: 0 kB' 'Active: 7113956 kB' 'Inactive: 152112 kB' 'Active(anon): 6614084 kB' 'Inactive(anon): 0 kB' 'Active(file): 499872 kB' 'Inactive(file): 152112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7081868 kB' 'Mapped: 157080 kB' 'AnonPages: 187452 kB' 'Shmem: 6429884 kB' 'KernelStack: 11688 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322180 kB' 'Slab: 713576 kB' 'SReclaimable: 322180 kB' 'SUnreclaim: 391396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.609 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.610 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.871 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
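At this point the trace shows setup/common.sh's get_meminfo switching to /sys/devices/system/node/node1/meminfo and stripping the per-node "Node 1 " prefix before scanning for HugePages_Surp. For readers following the trace, a minimal self-contained sketch of the same pattern (the function name and the explicit extglob enable are illustrative assumptions, not the repo's exact code):

    #!/usr/bin/env bash
    # Sketch of the traced get_meminfo pattern: pick the per-node meminfo
    # file when a node is given, strip the "Node <n> " prefix, then scan
    # line by line for the requested field and print its value.
    shopt -s extglob  # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; drop it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    # Usage mirroring the trace: get_meminfo_sketch HugePages_Surp 1

With IFS=': ' both the colon and the space act as field separators, so a line like "HugePages_Surp: 0" splits into var=HugePages_Surp and val=0, and any trailing kB unit lands in the throwaway _ field.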
00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 49335344 kB' 'MemFree: 43718624 kB' 'MemUsed: 5616720 kB' 'SwapCached: 0 kB' 'Active: 2687332 kB' 'Inactive: 324136 kB' 'Active(anon): 2574180 kB' 'Inactive(anon): 0 kB' 'Active(file): 113152 kB' 'Inactive(file): 324136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2557636 kB' 'Mapped: 37896 kB' 'AnonPages: 453908 kB' 'Shmem: 2120348 kB' 'KernelStack: 8056 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157100 kB' 'Slab: 384832 kB' 'SReclaimable: 157100 kB' 'SUnreclaim: 227732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
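A quick consistency check on the node1 dump just printed: MemUsed = MemTotal - MemFree = 49335344 - 43718624 = 5616720 kB, which matches the reported MemUsed field, and HugePages_Total = HugePages_Free = 512 with HugePages_Surp: 0, exactly the per-node count the even_2G_alloc verification below expects ("node1=512 expecting 512").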
00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.872 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
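Throughout these traces, tests like [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] look odd only because of how set -x prints them: the right-hand side of == inside [[ ]] is quoted in the script so it matches literally rather than as a glob pattern, and bash's tracer re-renders that quoted word with every character backslash-escaped. In source form the comparison is simply (a sketch, not the repo's literal line):

    # What xtrace renders as:  [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    var=Mapped
    if [[ $var == "HugePages_Surp" ]]; then
        echo "found the field"
    fi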
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:57.873 node0=512 expecting 512
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:57.873 node1=512 expecting 512
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:57.873
00:02:57.873 real 0m2.882s
00:02:57.873 user 0m1.166s
00:02:57.873 sys 0m1.767s
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:57.873 20:01:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:57.873 ************************************
00:02:57.873 END TEST even_2G_alloc
00:02:57.873 ************************************
00:02:57.873 20:01:44 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:57.873 20:01:44 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:57.873 20:01:44 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:57.873 20:01:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:57.873 ************************************
00:02:57.873 START TEST odd_alloc
00:02:57.873 ************************************
00:02:57.873 20:01:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
20:01:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
20:01:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
20:01:44 setup.sh.hugepages.odd_alloc --
setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:00.409 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.409 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.409 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:00.409 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 97453872 kB' 'MemFree: 81641700 kB' 'MemAvailable: 82353412 kB' 'Buffers: 1308 kB' 'Cached: 9638260 kB' 'SwapCached: 0 kB' 'Active: 9801924 kB' 'Inactive: 476248 kB' 'Active(anon): 9188900 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641816 kB' 'Mapped: 195076 kB' 'Shmem: 8550296 kB' 'KReclaimable: 479280 kB' 'Slab: 1098468 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619188 kB' 'KernelStack: 19680 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56065940 kB' 'Committed_AS: 10650044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212424 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 
20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.674 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:00.675 
20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81641700 kB' 'MemAvailable: 82353412 kB' 'Buffers: 1308 kB' 'Cached: 9638264 kB' 'SwapCached: 0 kB' 'Active: 9801652 kB' 'Inactive: 476248 kB' 'Active(anon): 9188628 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641540 kB' 'Mapped: 195068 kB' 'Shmem: 8550300 kB' 'KReclaimable: 479280 kB' 'Slab: 1098404 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619124 kB' 'KernelStack: 19696 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56065940 kB' 'Committed_AS: 10650060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212408 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
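The system-wide dump above reports HugePages_Total: 1025, the odd count odd_alloc asked for (get_test_nr_hugepages 2098176 works out to 1024.5 pages at the 2048 kB page size, which the trace shows taken up to nr_hugepages=1025). The get_test_nr_hugepages_per_node trace earlier (setup/hugepages.sh@81-84) showed that count spread across the two NUMA nodes as node1=512 and node0=513. A sketch of an equal-share-plus-remainder split that reproduces those numbers (illustrative only; the repo's arithmetic may differ in detail):

    # Split an odd hugepage count over NUMA nodes; the remainder page
    # goes to the last node filled (node 0 here), giving 513 + 512 = 1025.
    nr_hugepages=1025
    no_nodes=2
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 each
        (( node == 0 )) && (( nodes_test[node] += nr_hugepages % no_nodes ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # node0=513 node1=512

The HUGE_EVEN_ALLOC=yes and HUGEMEM=2049 settings in the setup trace above are consistent with requesting this near-even per-node spread.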
00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.675 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the field-by-field scan continues over the remaining /proc/meminfo keys (Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd), each taking the continue branch ...]
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
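The get_meminfo helper traced above is the workhorse of these checks: it snapshots /proc/meminfo (or a per-node sysfs meminfo when a node id is passed), strips any "Node N" prefix, then walks the fields until the requested key matches and echoes its value. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Sketch of the traced get_meminfo pattern (a reconstruction, not the SPDK source).
    shopt -s extglob                          # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # With a node id, read that node's sysfs meminfo instead (cf. common.sh@23-24)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Run against a snapshot like the one printed below, get_meminfo HugePages_Surp prints 0, which is exactly what the trace echoes before hugepages.sh sets surp=0.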
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:00.676 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81641500 kB' 'MemAvailable: 82353212 kB' 'Buffers: 1308 kB' 'Cached: 9638280 kB' 'SwapCached: 0 kB' 'Active: 9801836 kB' 'Inactive: 476248 kB' 'Active(anon): 9188812 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641740 kB' 'Mapped: 195068 kB' 'Shmem: 8550316 kB' 'KReclaimable: 479280 kB' 'Slab: 1098444 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619164 kB' 'KernelStack: 19712 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56065940 kB' 'Committed_AS: 10650080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212408 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[... the field-by-field scan repeats over this snapshot for HugePages_Rsvd; every non-matching key from MemTotal through HugePages_Free takes the continue branch ...]
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:00.677 nr_hugepages=1025
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:00.677 resv_hugepages=0
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:00.677 surplus_hugepages=0
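In the entries that follow (hugepages.sh@107-110) the test asserts that the odd-sized request was fully honored: the requested 1025 pages must equal nr_hugepages plus surplus plus reserved, and the kernel's own HugePages_Total must report the same figure. A hedged sketch of that bookkeeping, using the get_meminfo sketch above ("want" is an assumed stand-in for the requested count):

    #!/usr/bin/env bash
    # Hedged sketch of the consistency check traced at hugepages.sh@107-110.
    want=1025                                     # the odd allocation under test
    nr_hugepages=$(< /proc/sys/vm/nr_hugepages)   # kernel's configured default-size count
    surp=$(get_meminfo HugePages_Surp)            # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)            # 0 in this run
    (( want == nr_hugepages + surp + resv )) || echo "odd_alloc: count mismatch"
    (( want == $(get_meminfo HugePages_Total) )) || echo "odd_alloc: total mismatch"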
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:00.677 anon_hugepages=0
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:00.677 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81641808 kB' 'MemAvailable: 82353520 kB' 'Buffers: 1308 kB' 'Cached: 9638300 kB' 'SwapCached: 0 kB' 'Active: 9801840 kB' 'Inactive: 476248 kB' 'Active(anon): 9188816 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641736 kB' 'Mapped: 195068 kB' 'Shmem: 8550336 kB' 'KReclaimable: 479280 kB' 'Slab: 1098444 kB' 'SReclaimable: 479280 kB' 'SUnreclaim: 619164 kB' 'KernelStack: 19712 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56065940 kB' 'Committed_AS: 10650104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212408 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[... the field-by-field scan repeats over this snapshot for HugePages_Total; every non-matching key from MemTotal through Unaccepted takes the continue branch ...]
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
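get_nodes discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) and, per the trace above, splits the odd page target across the two nodes as 512 and 513 before re-reading each node's own sysfs meminfo for HugePages_Surp. A sketch of that split, reconstructed from the trace (the real hugepages.sh arithmetic may differ):

    #!/usr/bin/env bash
    # Reconstruction of the traced 512/513 split (a sketch; hugepages.sh may differ).
    shopt -s extglob nullglob
    want=1025
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=0               # discover node ids (0 and 1 here)
    done
    no_nodes=${#nodes_sys[@]}                     # 2 on this box
    for i in "${!nodes_sys[@]}"; do
        nodes_sys[i]=$(( want / no_nodes ))       # 512 per node...
    done
    (( nodes_sys[no_nodes - 1] += want % no_nodes ))  # ...last node absorbs the odd page: 513
    printf 'node%s gets %s pages\n' 0 "${nodes_sys[0]}" 1 "${nodes_sys[1]}"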
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118528 kB' 'MemFree: 37906572 kB' 'MemUsed: 10211956 kB' 'SwapCached: 0 kB' 'Active: 7113684 kB' 'Inactive: 152112 kB' 'Active(anon): 6613812 kB' 'Inactive(anon): 0 kB' 'Active(file): 499872 kB' 'Inactive(file): 152112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7081968 kB' 'Mapped: 157172 kB' 'AnonPages: 186992 kB' 'Shmem: 6429984 kB' 'KernelStack: 11656 kB' 'PageTables: 4960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322180 kB' 'Slab: 713572 kB' 'SReclaimable: 322180 kB' 'SUnreclaim: 391392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... the field-by-field scan now runs over the node0 snapshot for HugePages_Surp; MemTotal through WritebackTmp each take the continue branch ...]
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 49335344 kB' 'MemFree: 43736180 kB' 'MemUsed: 5599164 kB' 'SwapCached: 0 kB' 'Active: 2688172 kB' 'Inactive: 324136 kB' 'Active(anon): 2575020 kB' 'Inactive(anon): 0 kB' 'Active(file): 113152 kB' 'Inactive(file): 324136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2557660 kB' 'Mapped: 37896 kB' 'AnonPages: 454744 kB' 'Shmem: 2120372 kB' 'KernelStack: 8056 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157100 kB' 'Slab: 384872 kB' 'SReclaimable: 157100 kB' 'SUnreclaim: 227772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.678 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 
20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:00.679 node0=512 expecting 513 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:00.679 node1=513 expecting 512 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:00.679 00:03:00.679 real 0m2.938s 00:03:00.679 user 0m1.193s 00:03:00.679 sys 0m1.810s 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:00.679 20:01:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:00.679 ************************************ 00:03:00.679 END TEST odd_alloc 00:03:00.679 ************************************ 00:03:00.679 20:01:47 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:00.679 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:00.679 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:00.679 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:00.939 ************************************ 00:03:00.939 START TEST custom_alloc 00:03:00.939 ************************************ 00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 
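The odd_alloc block above leans entirely on get_meminfo from setup/common.sh: snapshot /proc/meminfo (or a per-node sysfs meminfo), strip the "Node N " prefix, then walk the fields with IFS=': ' until the requested key matches and echo its value. The sketch below re-creates that pattern as a standalone function, reconstructed from the commands visible in the trace; it is lightly restructured (a for loop in place of the traced read/continue cycle) and is not the verbatim SPDK source.

    #!/usr/bin/env bash
    # Minimal re-creation of the get_meminfo pattern traced above.
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _rest
        local mem_f mem line

        mem_f=/proc/meminfo
        # Per-node stats live under sysfs; each line carries a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the prefix so both layouts parse alike

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _rest <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"  # value only, e.g. "512" or "48118528"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # surplus hugepages on NUMA node 0

Every "[[ Field == \H\u\g\e\P\a\g\e\s... ]] / continue" pair in the condensed trace above is one iteration of exactly this loop over one snapshot line.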
00:03:00.679 20:01:47 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:00.679 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:00.679 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:00.679 20:01:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:00.939 ************************************
00:03:00.939 START TEST custom_alloc
00:03:00.939 ************************************
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:00.939 20:01:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:03.478 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:03.478 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:03.478 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:03.478 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
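At this point the log has shown the whole HUGENODE derivation: 512 pages on node 0, 1024 pages on node 1, joined on the custom IFS=, into the string handed to scripts/setup.sh. A compact reconstruction of that bookkeeping follows; the one assumption is that get_test_nr_hugepages sizes are counted in kB, which is consistent with the trace (1048576 kB / 2048 kB = 512, 2097152 kB / 2048 kB = 1024), with the 2048 kB page size taken from the "Hugepagesize: 2048 kB" line in the snapshots.

    #!/usr/bin/env bash
    # Sketch of the nodes_hp -> HUGENODE bookkeeping traced above.
    hugepagesize_kb=2048

    nr_pages() { echo $(( $1 / hugepagesize_kb )); }

    declare -a nodes_hp
    nodes_hp[0]=$(nr_pages $((1024 * 1024)))      # 1 GiB on node 0 -> 512
    nodes_hp[1]=$(nr_pages $((2 * 1024 * 1024)))  # 2 GiB on node 1 -> 1024

    declare -a HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done

    IFS=,  # the trace sets "local IFS=," so ${HUGENODE[*]} joins on commas
    echo "HUGENODE=${HUGENODE[*]}"      # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$_nr_hugepages"  # 1536

The comma-joined HUGENODE string is what scripts/setup.sh received before producing the vfio-pci listing above, and 1536 is the total the verification pass below reads back.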
'SwapCached: 0 kB' 'Active: 9803992 kB' 'Inactive: 476248 kB' 'Active(anon): 9190968 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643464 kB' 'Mapped: 195168 kB' 'Shmem: 8550448 kB' 'KReclaimable: 479216 kB' 'Slab: 1098284 kB' 'SReclaimable: 479216 kB' 'SUnreclaim: 619068 kB' 'KernelStack: 19712 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 55542676 kB' 'Committed_AS: 10650408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212312 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.745 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.746 20:01:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... scan trace elided: WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each fail the \A\n\o\n\H\u\g\e\P\a\g\e\s match and hit continue at setup/common.sh@32 ...]
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.746 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 80636524 kB' 'MemAvailable: 81348172 kB' 'Buffers: 1308 kB' 'Cached: 9638412 kB' 'SwapCached: 0 kB' 'Active: 9803812 kB' 'Inactive: 476248 kB' 'Active(anon): 9190788 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643340 kB' 'Mapped: 195168 kB' 'Shmem: 8550448 kB' 'KReclaimable: 479216 kB' 'Slab: 1098288 kB' 'SReclaimable: 479216 kB' 'SUnreclaim: 619072 kB' 'KernelStack: 19744 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 55542676 kB' 'Committed_AS: 10650424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212280 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
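The trace above is get_meminfo walking /proc/meminfo key by key until the requested field matches. For readability, here is a minimal bash sketch of that lookup loop, reconstructed only from the setup/common.sh@17-33 markers in this log; it is an illustration, not the shipped SPDK setup/common.sh, and the per-node fallback and extglob handling are assumptions:

  #!/usr/bin/env bash
  # Minimal sketch of the lookup loop traced above (illustrative only).
  shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo mem
      # Assumption: with a node id, prefer that node's meminfo when present
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip that prefix
      mem=("${mem[@]#Node +([0-9]) }")
      # Scan key by key; every non-matching key hits continue, as in the trace
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Against the snapshot printed above, get_meminfo HugePages_Surp would print 0, which is exactly what the trace returns at setup/common.sh@33.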
[... scan trace elided: MemTotal through HugePages_Rsvd each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hit continue at setup/common.sh@32 ...]
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
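The snapshots themselves can be cross-checked: with 'Hugepagesize: 2048 kB' and 'HugePages_Total: 1536', the pool should account for 1536 pages of 2048 kB each, which is exactly the 'Hugetlb: 3145728 kB' line. A one-line bash check, using only values taken from this run:

  # Pure arithmetic on the snapshot values; no system access needed.
  echo $(( 1536 * 2048 ))   # 3145728 -> matches 'Hugetlb: 3145728 kB'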
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.748 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 80638124 kB' 'MemAvailable: 81349772 kB' 'Buffers: 1308 kB' 'Cached: 9638416 kB' 'SwapCached: 0 kB' 'Active: 9803464 kB' 'Inactive: 476248 kB' 'Active(anon): 9190440 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643452 kB' 'Mapped: 195840 kB' 'Shmem: 8550452 kB' 'KReclaimable: 479216 kB' 'Slab: 1098272 kB' 'SReclaimable: 479216 kB' 'SUnreclaim: 619056 kB' 'KernelStack: 19728 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 55542676 kB' 'Committed_AS: 10652200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212264 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[... scan trace elided: MemTotal through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and hit continue at setup/common.sh@32 ...]
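Note the probe at setup/common.sh@23: with node= empty, the tested path degenerates to /sys/devices/system/node/node/meminfo, which cannot exist, so every lookup in this trace reads the system-wide /proc/meminfo. A hypothetical per-node call against the sketch from earlier (node id 0 is an assumed example, not something this run performs):

  # Assumed example: ask for the hugepage total on NUMA node 0.
  # With the sketch above, this would read
  # /sys/devices/system/node/node0/meminfo when that file exists.
  get_meminfo HugePages_Total 0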
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:03.750 nr_hugepages=1536
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:03.750 resv_hugepages=0
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:03.750 surplus_hugepages=0
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:03.750 anon_hugepages=0
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.750 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 80634940 kB' 'MemAvailable: 81346588 kB' 'Buffers: 1308 kB' 'Cached: 9638416 kB' 'SwapCached: 0 kB' 'Active: 9806912 kB' 'Inactive: 476248 kB' 'Active(anon): 9193888 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646900 kB' 'Mapped: 195840 kB' 'Shmem: 8550452 kB' 'KReclaimable: 479216 kB' 'Slab: 1098272 kB' 'SReclaimable: 479216 kB' 'SUnreclaim: 619056 kB' 'KernelStack: 19712 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 55542676 kB' 'Committed_AS: 10655388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212248 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
[... scan trace elided: MemTotal through AnonHugePages each fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hit continue at setup/common.sh@32 ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- 
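For readers skimming the trace: the loop above is setup/common.sh's get_meminfo helper doing a linear scan of a meminfo file. A minimal stand-alone sketch of that pattern, reconstructed from the traced commands (the IFS=': ' / read -r var val _ / compare-and-continue cycle and the "Node N " prefix strip are taken directly from the trace; anything not shown there, such as the not-found return code, is an assumption):

#!/usr/bin/env bash
# Sketch of the scan traced above: read a meminfo file, split each line on
# ': ' into key/value, and print the value of the requested key.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries use the sysfs copy when it exists (cf. common.sh@23-@24).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1   # assumption: signal "key not found" to the caller
}

get_meminfo HugePages_Total      # prints 1536 on the machine traced above
get_meminfo HugePages_Surp 0     # per-node variant, as traced a few entries below

The linear scan is O(keys) per lookup, which is why the raw xtrace shows one compare-and-continue pair per meminfo key.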
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118528 kB' 'MemFree: 37931500 kB' 'MemUsed: 10187028 kB' 'SwapCached: 0 kB' 'Active: 7113320 kB' 'Inactive: 152112 kB' 'Active(anon): 6613448 kB' 'Inactive(anon): 0 kB' 'Active(file): 499872 kB' 'Inactive(file): 152112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7082028 kB' 'Mapped: 157172 kB' 'AnonPages: 186652 kB' 'Shmem: 6430044 kB' 'KernelStack: 11672 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322180 kB' 'Slab: 713252 kB' 'SReclaimable: 322180 kB' 'SUnreclaim: 391072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:03.752 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-@32 step through the node0 keys (MemTotal through HugePages_Free) with continue until HugePages_Surp matches]
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.753 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.754 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 49335344 kB' 'MemFree: 42705160 kB' 'MemUsed: 6630184 kB' 'SwapCached: 0 kB' 'Active: 2689088 kB' 'Inactive: 324136 kB' 'Active(anon): 2575936 kB' 'Inactive(anon): 0 kB' 'Active(file): 113152 kB' 'Inactive(file): 324136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2557776 kB' 'Mapped: 37896 kB' 'AnonPages: 455640 kB' 'Shmem: 2120488 kB' 'KernelStack: 8024 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 156940 kB' 'Slab: 384924 kB' 'SReclaimable: 156940 kB' 'SUnreclaim: 227984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-@32 step through the node1 keys (MemTotal through HugePages_Free) with continue until HugePages_Surp matches]
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:03.755 node0=512 expecting 512
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:03.755 node1=1024 expecting 1024
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:03.755
00:03:03.755 real	0m2.986s
00:03:03.755 user	0m1.213s
00:03:03.755 sys	0m1.841s
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:03.755 20:01:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:03.755 ************************************
00:03:03.755 END TEST custom_alloc
00:03:03.755 ************************************
00:03:03.755 20:01:50 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:03.755 20:01:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:03.755 20:01:50 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:03.755 20:01:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:04.014 ************************************
00:03:04.014 START TEST no_shrink_alloc
00:03:04.014 ************************************
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
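The nr_hugepages=1024 above is the requested size divided by the hugepage size, and the @58 trace just below pins that count to the requested NUMA nodes. A sketch of the whole planning step, assuming both sizes are in kB (consistent with 2097152 / 2048 = 1024 and the 'Hugepagesize: 2048 kB' reported later in this log); variable names follow the trace, everything else is illustrative:

#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages / get_test_nr_hugepages_per_node as traced
# at setup/hugepages.sh@49-@73.
default_hugepages=2048            # kB, assumption matching 'Hugepagesize: 2048 kB'
size=2097152                      # kB, first argument of the traced call
node_ids=('0')                    # remaining arguments: explicit NUMA node list

(( size >= default_hugepages )) || exit 1       # cf. the @55 check
nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024

declare -a nodes_test
# With an explicit node list, every listed node gets the full count --
# exactly what nodes_test[_no_nodes]=1024 does in the trace below.
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages
done
echo "node0 => ${nodes_test[0]} hugepages"      # node0 => 1024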
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:04.014 20:01:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:06.543 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.543 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.543 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:06.543 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
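The verify_nr_hugepages body that continues below first gates on transparent-hugepage state before reading AnonHugePages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at @96 checks whether the bracketed (active) setting in that string is "never". A hedged sketch of that gate; the sysfs path is the conventional THP control file and is an assumption here, since the trace only shows the resulting string:

# Sketch of the gate traced at setup/hugepages.sh@96-@97: count AnonHugePages
# only when THP is not pinned to "never".
anon=0
# Assumed path; the trace shows only the string it yields ("always [madvise] never").
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
    # Same lookup get_meminfo performs above, done with awk for brevity.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "AnonHugePages counted: ${anon:-0} kB"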
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81677440 kB' 'MemAvailable: 82388896 kB' 'Buffers: 1308 kB' 'Cached: 9638568 kB' 'SwapCached: 0 kB' 'Active: 9804648 kB' 'Inactive: 476248 kB' 'Active(anon): 9191624 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644432 kB' 'Mapped: 195176 kB' 'Shmem: 8550604 kB' 'KReclaimable: 479024 kB' 'Slab: 1097712 kB' 'SReclaimable: 479024 kB' 'SUnreclaim: 618688 kB' 'KernelStack: 19856 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10654008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212520 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[xtrace condensed: setup/common.sh@31-@32 continue over each key (MemFree through SReclaimable) while scanning for AnonHugePages]
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.807 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.808 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.809 20:01:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.809 20:01:53 
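The scan above is the get_meminfo helper from setup/common.sh: it snapshots the chosen meminfo file into an array, then walks it key by key until the requested field matches. (The backslash-escaped right-hand side in the trace, e.g. \A\n\o\n\H\u\g\e\P\a\g\e\s, is only xtrace's way of printing a quoted, literally-matched pattern.) Below is a minimal, runnable sketch of the helper as reconstructed from the traced lines common.sh@16-@33 -- an approximation, not the verbatim SPDK source; the body of the [[ -n $node ]] branch at common.sh@25 is not visible in this excerpt, so its error handling is an assumption.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below (common.sh@29)

    get_meminfo() {    # reconstructed sketch: get_meminfo <field> [numa-node]
        local get=$1                       # common.sh@17: field to look up, e.g. AnonHugePages
        local node=$2                      # common.sh@18: empty in this run
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                # common.sh@22: system-wide view by default
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view (common.sh@23)
        elif [[ -n $node ]]; then
            return 1                       # assumed: node requested but has no meminfo (common.sh@25)
        fi
        mapfile -t mem < "$mem_f"          # common.sh@28: one array element per line
        mem=("${mem[@]#Node +([0-9]) }")   # common.sh@29: strip "Node N " prefixes of per-node files
        while IFS=': ' read -r var val _; do   # common.sh@31: split "Key: value kB"
            [[ $var == "$get" ]] || continue   # common.sh@32: skip non-matching keys
            echo "$val" && return 0            # common.sh@33: print the numeric value only
        done < <(printf '%s\n' "${mem[@]}")    # feed the snapshot back in (traced at common.sh@16)
        return 1
    }

    get_meminfo AnonHugePages    # prints 0 on the system traced above
    get_meminfo HugePages_Total  # prints 1024 on the system traced above

Splitting on IFS=': ' means a line such as "AnonHugePages: 0 kB" yields var=AnonHugePages and val=0, with the "kB" unit discarded into _, which is why the traced calls return bare numbers.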
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.809 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81676940 kB' 'MemAvailable: 82388396 kB' 'Buffers: 1308 kB' 'Cached: 9638568 kB' 'SwapCached: 0 kB' 'Active: 9804332 kB' 'Inactive: 476248 kB' 'Active(anon): 9191308 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643692 kB' 'Mapped: 195096 kB' 'Shmem: 8550604 kB' 'KReclaimable: 479024 kB' 'Slab: 1097676 kB' 'SReclaimable: 479024 kB' 'SUnreclaim: 618652 kB' 'KernelStack: 19936 kB' 'PageTables: 9348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10654024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212504 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:03:06.809-00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] is false for every key from MemTotal through HugePages_Rsvd; each iteration hits continue and re-reads the next field with IFS=': '
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
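With anon and surp in hand, hugepages.sh gathers one more counter (HugePages_Rsvd, traced below) and then asserts that the configured pool did not shrink. A short sketch of that bookkeeping, following the traced lines hugepages.sh@97-@109 -- the left-hand operands of the @107 and @109 comparisons are already expanded to 1024 in the trace, so their sources are assumptions here, shown as hypothetical free_hugepages and total_hugepages variables:

    # Sketch of the no_shrink_alloc bookkeeping (variable names from the trace;
    # the origin of the literal 1024 at hugepages.sh@107/@109 is assumed).
    anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97  -> 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> 0
    echo "nr_hugepages=$nr_hugepages"    # hugepages.sh@102 -> nr_hugepages=1024
    echo "resv_hugepages=$resv"          # hugepages.sh@103
    echo "surplus_hugepages=$surp"       # hugepages.sh@104
    echo "anon_hugepages=$anon"          # hugepages.sh@105
    # hugepages.sh@107: every configured page is still accounted for
    (( free_hugepages == nr_hugepages + surp + resv ))   # traced as (( 1024 == nr_hugepages + surp + resv ))
    # hugepages.sh@109: the pool itself still holds the requested count
    (( total_hugepages == nr_hugepages ))                # traced as (( 1024 == nr_hugepages ))

Both arithmetic checks evaluate true in this run (1024 == 1024 + 0 + 0 and 1024 == 1024), so the test proceeds to re-read HugePages_Total.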
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.811 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81684912 kB' 'MemAvailable: 82396368 kB' 'Buffers: 1308 kB' 'Cached: 9638588 kB' 'SwapCached: 0 kB' 'Active: 9803352 kB' 'Inactive: 476248 kB' 'Active(anon): 9190328 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643064 kB' 'Mapped: 195088 kB' 'Shmem: 8550624 kB' 'KReclaimable: 479024 kB' 'Slab: 1097660 kB' 'SReclaimable: 479024 kB' 'SUnreclaim: 618636 kB' 'KernelStack: 19936 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10654048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212600 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:03:06.811-00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] is false for every key from MemTotal through HugePages_Free; each iteration hits continue and re-reads the next field with IFS=': '
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:06.813 nr_hugepages=1024
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:06.813 resv_hugepages=0
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:06.813 surplus_hugepages=0
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:06.813 anon_hugepages=0
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.813 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81683764 kB' 'MemAvailable: 82395220 kB' 'Buffers: 1308 kB' 'Cached: 9638608 kB'
'SwapCached: 0 kB' 'Active: 9803536 kB' 'Inactive: 476248 kB' 'Active(anon): 9190512 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643180 kB' 'Mapped: 195088 kB' 'Shmem: 8550644 kB' 'KReclaimable: 479024 kB' 'Slab: 1097692 kB' 'SReclaimable: 479024 kB' 'SUnreclaim: 618668 kB' 'KernelStack: 19856 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10652808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212504 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB' 00:03:06.813-00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read loop scans the meminfo keys (MemTotal through Unaccepted); none matches HugePages_Total yet, so each iteration continues
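Immediately below, HugePages_Total finally matches (echo 1024) and get_nodes re-runs the same lookup per NUMA node: for node 0 the helper swaps mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that common.sh@29 strips with the extglob expansion "${mem[@]#Node +([0-9]) }". A minimal sketch of that per-node variant, assuming bash with extglob enabled; the function name is illustrative:

    #!/usr/bin/env bash
    shopt -s extglob    # required for the +([0-9]) pattern used below
    # Sketch: return the value for one key from a node-local meminfo file.
    get_node_meminfo_value() {
        local get=$1 node=$2 line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }              # drop the "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    surp=$(get_node_meminfo_value HugePages_Surp 0)  # on this host: 0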
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.815 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118528 kB' 'MemFree: 36892336 kB' 'MemUsed: 11226192 kB' 'SwapCached: 0 kB' 'Active: 7115600 kB' 'Inactive: 152112 kB' 'Active(anon): 6615728 kB' 'Inactive(anon): 0 kB' 'Active(file): 499872 kB' 'Inactive(file): 152112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7082064 kB' 'Mapped: 157188 kB' 'AnonPages: 188836 kB' 'Shmem: 
6430080 kB' 'KernelStack: 11832 kB' 'PageTables: 5460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322180 kB' 'Slab: 712900 kB' 'SReclaimable: 322180 kB' 'SUnreclaim: 390720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:06.815-00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read loop scans the node0 meminfo keys (MemTotal through FilePmdMapped); none matches HugePages_Surp, so each iteration continues 00:03:06.816 20:01:53
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.816 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:06.817 node0=1024 expecting 1024 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.817 20:01:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:10.106 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.106 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.106 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:00:04.3 (8086 2021): Already 
using the vfio-pci driver 00:03:10.106 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:10.106 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.106 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.106 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81699120 kB' 'MemAvailable: 82410576 kB' 'Buffers: 1308 kB' 'Cached: 9638696 kB' 'SwapCached: 0 kB' 'Active: 9804520 kB' 'Inactive: 476248 kB' 'Active(anon): 9191496 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643992 kB' 'Mapped: 195236 kB' 'Shmem: 8550732 kB' 'KReclaimable: 479024 kB' 'Slab: 1098064 kB' 'SReclaimable: 479024 kB' 'SUnreclaim: 619040 kB' 'KernelStack: 19808 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10651408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212440 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB' 00:03:10.107-00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read loop scans the meminfo keys (MemTotal through HardwareCorrupted); none matches AnonHugePages, so each iteration continues 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81700092 kB' 'MemAvailable: 82411548 kB' 'Buffers: 1308 kB' 'Cached: 9638708 kB' 'SwapCached: 0 kB' 'Active: 9803480 kB' 'Inactive: 476248 kB' 'Active(anon): 9190456 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642940 kB' 'Mapped: 195088 kB' 'Shmem: 8550744 kB' 'KReclaimable: 479024 kB' 'Slab: 1098028 kB' 'SReclaimable: 479024 kB' 'SUnreclaim: 619004 kB' 'KernelStack: 19696 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10651928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212408 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB' 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- 
00:03:10.108 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (scan loop: continue past each non-matching key from MemTotal through HugePages_Rsvd)
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
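Condensed, the lookup that common.sh@16-@33 keeps tracing above is a plain key scan over /proc/meminfo: load the file into an array, then split each line on ': ' until the requested key matches. The sketch below is reconstructed from the xtrace fragments only; it is not the verbatim SPDK helper, and get_meminfo_sketch is a hypothetical name.

#!/usr/bin/env bash
# Minimal sketch, reconstructed from the xtrace fragments above (assumption:
# the loop body matches common.sh@31-@33; not the verbatim helper).
get_meminfo_sketch() {
    local get=$1 var val _
    local mem
    mapfile -t mem < /proc/meminfo
    # Feed the array back through a pipe and split each line on ': '.
    while IFS=': ' read -r var val _; do
        # Every non-matching key is one '[[ ... ]] / continue' pair in the
        # trace; the first match prints the value and returns.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch AnonHugePages   # prints 0 on the node traced above
get_meminfo_sketch HugePages_Surp  # prints 0

Splitting with IFS=': ' makes read strip both the colon and the padding spaces, so val holds the bare number and the trailing 'kB' unit lands in the throwaway field _.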
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81699336 kB' 'MemAvailable: 82410792 kB' 'Buffers: 1308 kB' 'Cached: 9638708 kB' 'SwapCached: 0 kB' 'Active: 9803124 kB' 'Inactive: 476248 kB' 'Active(anon): 9190100 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642608 kB' 'Mapped: 195088 kB' 'Shmem: 8550744 kB' 'KReclaimable: 479024 kB' 'Slab: 1098028 kB' 'SReclaimable: 479024 kB' 'SUnreclaim: 619004 kB' 'KernelStack: 19680 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10651948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212408 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:03:10.110 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (scan loop: continue past each non-matching key from MemTotal through HugePages_Free)
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:10.112 nr_hugepages=1024
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:10.112 resv_hugepages=0
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:10.112 surplus_hugepages=0
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:10.112 anon_hugepages=0
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
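The arithmetic expanded at hugepages.sh@107 above (and @109, which follows) is the actual assertion of this no_shrink_alloc test: after the allocation, the kernel-reported hugepage pool must not have shrunk. Both 1024s on the left are already substituted in the trace, so their source is not visible in this excerpt; the sketch below models them as the kernel-reported total, reusing the hypothetical get_meminfo_sketch helper from earlier.

# Hedged sketch of the no-shrink assertion (variable sources reconstructed,
# not copied from setup/hugepages.sh).
nr_hugepages=1024                                # pool size requested by the test
anon=$(get_meminfo_sketch AnonHugePages)         # 0 in the trace
surp=$(get_meminfo_sketch HugePages_Surp)        # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)        # 0
total=$(get_meminfo_sketch HugePages_Total)      # 1024

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# With zero surplus and reserved pages, the reported total must equal the
# requested allocation exactly -- otherwise the pool was shrunk.
(( total == nr_hugepages + surp + resv ))
(( total == nr_hugepages ))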
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.112 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97453872 kB' 'MemFree: 81699428 kB' 'MemAvailable: 82410884 kB' 'Buffers: 1308 kB' 'Cached: 9638748 kB' 'SwapCached: 0 kB' 'Active: 9803516 kB' 'Inactive: 476248 kB' 'Active(anon): 9190492 kB' 'Inactive(anon): 0 kB' 'Active(file): 613024 kB' 'Inactive(file): 476248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642968 kB' 'Mapped: 195088 kB' 'Shmem: 8550784 kB' 'KReclaimable: 479024 kB' 'Slab: 1098028 kB' 'SReclaimable: 479024 kB' 'SUnreclaim: 619004 kB' 'KernelStack: 19696 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 56066964 kB' 'Committed_AS: 10651972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212408 kB' 'VmallocChunk: 0 kB' 'Percpu: 109824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3163092 kB' 'DirectMap2M: 31119360 kB' 'DirectMap1G: 67108864 kB'
00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (scan loop: continue past each non-matching key from MemTotal onward; this excerpt ends mid-scan at Dirty)
00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 
20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.113 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118528 kB' 'MemFree: 36896012 kB' 'MemUsed: 11222516 kB' 'SwapCached: 0 kB' 'Active: 7115788 kB' 'Inactive: 152112 kB' 'Active(anon): 6615916 kB' 'Inactive(anon): 0 kB' 'Active(file): 499872 kB' 'Inactive(file): 152112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7082096 kB' 'Mapped: 157192 kB' 'AnonPages: 188976 kB' 'Shmem: 6430112 kB' 'KernelStack: 11672 kB' 'PageTables: 5012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322180 kB' 'Slab: 712952 kB' 'SReclaimable: 322180 kB' 'SUnreclaim: 390772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.114 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:10.115 node0=1024 expecting 1024 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:10.115 00:03:10.115 real 0m6.052s 00:03:10.115 user 0m2.401s 00:03:10.115 sys 0m3.781s 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:10.115 20:01:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:10.115 ************************************ 00:03:10.115 END TEST no_shrink_alloc 00:03:10.115 ************************************ 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.115 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.116 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:10.116 20:01:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:10.116 00:03:10.116 real 0m22.396s 00:03:10.116 user 0m8.672s 00:03:10.116 sys 0m13.229s 00:03:10.116 20:01:56 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:10.116 20:01:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:10.116 ************************************ 00:03:10.116 END TEST hugepages 00:03:10.116 ************************************ 00:03:10.116 20:01:57 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:10.116 20:01:57 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:10.116 20:01:57 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:10.116 20:01:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:10.116 ************************************ 00:03:10.116 START TEST driver 00:03:10.116 ************************************ 00:03:10.116 20:01:57 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:10.116 * Looking for test storage... 
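The clear_hp trace above walks every NUMA node's hugepage pools and echoes 0 into each; xtrace does not print redirections, so the nr_hugepages target in this sketch is an assumption inferred from the sysfs paths the loop expands:

    # Sketch of the per-node hugepage reset traced above. xtrace omits
    # redirections, so writing to nr_hugepages is an assumption; the
    # glob matches the sysfs paths the loop iterates over. Needs root.
    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # releases 2048kB and 1048576kB pools alike
            done
        done
    }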
00:03:10.116 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:10.116 20:01:57 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:10.116 20:01:57 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.116 20:01:57 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.379 20:02:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:14.379 20:02:01 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:14.379 20:02:01 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:14.379 20:02:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:14.379 ************************************ 00:03:14.379 START TEST guess_driver 00:03:14.379 ************************************ 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 168 > 0 )) 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:14.379 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:14.379 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:14.379 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:14.379 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:14.379 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:14.379 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:14.379 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:14.379 20:02:01 
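The pick above settles on vfio-pci because the IOMMU groups directory is populated (168 groups) and modprobe --show-depends resolves vfio_pci to real .ko modules. A condensed sketch of that decision follows; it skips the enable_unsafe_noiommu_mode probe the trace also performs, and the fallback string mirrors the comparison the trace makes at driver.sh@51:

    # Condensed sketch of the driver pick traced above: prefer vfio-pci
    # when IOMMU groups exist and the module's dependency chain resolves
    # to real .ko files.
    pick_driver_sketch() {
        local -a groups=(/sys/kernel/iommu_groups/*)
        # Without nullglob an empty glob keeps the literal pattern, so
        # also require the first entry to actually exist.
        if [[ -e ${groups[0]} ]] \
           && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }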
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:14.379 Looking for driver=vfio-pci 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.379 20:02:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.671 20:02:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.239 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.239 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.239 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.239 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.239 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.239 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.498 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.498 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.498 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.498 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:18.498 20:02:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:18.498 20:02:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.498 20:02:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.771 00:03:23.772 real 0m8.527s 00:03:23.772 user 0m2.464s 00:03:23.772 sys 0m4.250s 00:03:23.772 20:02:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:23.772 20:02:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:23.772 ************************************ 00:03:23.772 END TEST guess_driver 00:03:23.772 ************************************ 00:03:23.772 00:03:23.772 real 0m12.817s 00:03:23.772 user 0m3.612s 00:03:23.772 sys 0m6.393s 00:03:23.772 20:02:09 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:23.772 20:02:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:23.772 ************************************ 00:03:23.772 END TEST driver 00:03:23.772 ************************************ 00:03:23.772 20:02:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:23.772 20:02:09 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:23.772 20:02:09 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:23.772 20:02:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:23.772 ************************************ 00:03:23.772 START TEST devices 00:03:23.772 ************************************ 00:03:23.772 20:02:09 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:23.772 * Looking for test storage... 00:03:23.772 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:23.772 20:02:10 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:23.772 20:02:10 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:23.772 20:02:10 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.772 20:02:10 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:26.305 20:02:13 
setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1
00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1
00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:03:26.305 20:02:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]]
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:26.305 20:02:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:26.305 20:02:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:26.305 No valid GPT data, bailing
00:03:26.305 20:02:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:26.305 20:02:13 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:26.305 20:02:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:26.305 20:02:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:26.305 20:02:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:26.305 20:02:13 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:03:26.305 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:03:26.305 20:02:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt
00:03:26.305 20:02:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1
00:03:26.564 No valid GPT data, bailing
00:03:26.564 20:02:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:03:26.564 20:02:13 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:26.564 20:02:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:03:26.564 20:02:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1
00:03:26.564 20:02:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:03:26.564 20:02:13 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1
00:03:26.564 20:02:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt
00:03:26.564 20:02:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1
00:03:26.564 No valid GPT data, bailing
00:03:26.564 20:02:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:03:26.564 20:02:13 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:26.564 20:02:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1
00:03:26.564 20:02:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1
00:03:26.564 20:02:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]]
00:03:26.564 20:02:13 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@209 -- # (( 3 > 0 ))
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:26.564 20:02:13 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:26.564 20:02:13 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:26.564 20:02:13 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:26.564 20:02:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:26.564 ************************************
00:03:26.564 START TEST nvme_mount
00:03:26.564 ************************************
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
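The enumeration that just completed walks /sys/block/nvme* (the !(*c*) extglob skips controller-style nodes such as nvme0c0n1), probes each namespace for an existing partition table with spdk-gpt.py and blkid, keeps only disks of at least min_disk_size (3221225472 bytes = 3 GiB), and records each disk's backing PCI address. A minimal standalone sketch of the same idea, assuming bash with extglob and the usual NVMe sysfs layout (this helper is illustrative, not the SPDK script):

    shopt -s extglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB floor, as in devices.sh@198
    declare -a blocks
    declare -A blocks_to_pci
    for block in /sys/block/nvme!(*c*); do
        dev=${block##*/}                         # e.g. nvme0n1
        # /sys/block/<dev>/size is always in 512-byte sectors
        size=$(( $(cat "$block/size") * 512 ))
        (( size >= min_disk_size )) || continue
        # climb from the namespace to the controller to its parent PCI device
        pci=$(basename "$(readlink -f "$block/device/device")")
        blocks+=("$dev")
        blocks_to_pci["$dev"]=$pci
    done
    for dev in "${blocks[@]}"; do echo "$dev -> ${blocks_to_pci[$dev]}"; done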
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:26.564 20:02:13 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:27.500 Creating new GPT entries in memory.
00:03:27.500 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:27.500 other utilities.
00:03:27.500 20:02:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:27.500 20:02:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:27.500 20:02:14 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:27.500 20:02:14 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:27.500 20:02:14 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:28.876 Creating new GPT entries in memory.
00:03:28.876 The operation has completed successfully.
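The sgdisk arguments above follow directly from the arithmetic in setup/common.sh: the requested 1 GiB byte size is converted to 512-byte sectors, and sgdisk takes an inclusive end LBA, which is why the first partition spans 2048:2099199. The same calculation, spelled out as a sketch:

    size=1073741824                          # 1 GiB requested, in bytes
    (( size /= 512 ))                        # 2097152 sectors (sgdisk counts sectors)
    part_start=2048                          # first aligned usable LBA after the GPT metadata
    (( part_end = part_start + size - 1 ))   # 2048 + 2097152 - 1 = 2099199, inclusive
    echo "sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}"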
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1627230
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:03:28.876 20:02:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:28.877 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.877 20:02:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:30.781 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:30.781 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:30.781 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:30.781 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
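The mkfs step above is a small helper: create the mount point, check the device node exists, format with mkfs.ext4 -qF, and mount. A self-contained equivalent (function name and paths here are illustrative, not setup/common.sh itself; the config scan resumes below):

    mkfs_and_mount() {
        local dev=$1 mount=$2
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev"    # -q: quiet, -F: don't prompt before formatting
        mount "$dev" "$mount"
    }
    mkfs_and_mount /dev/nvme0n1p1 /tmp/nvme_mount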
00:03:30.781 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:30.781 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.040 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.041 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.041 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:31.041 20:02:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:31.300 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:31.300 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:31.559 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:31.559 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:31.559 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:31.559 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
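cleanup_nvme above tears down in the reverse order of setup: unmount first, then wipe the partition's filesystem signature, then the whole disk's GPT. wipefs reports every signature it erases, and the byte patterns are recognizable: 53 ef is the ext4 superblock magic, 45 46 49 20 50 41 52 54 is ASCII "EFI PART" (primary and backup GPT headers), and 55 aa is the protective-MBR boot signature. The ordering matters, as this sketch shows (mount point illustrative):

    mountpoint -q /mnt/nvme_mount && umount /mnt/nvme_mount   # never wipe a mounted device
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1    # drops the ext4 magic (53 ef)
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1        # drops both GPT headers and the PMBR

The second mkfs run then reuses the whole disk with an explicit 1024M size argument, which mkfs.ext4 accepts as a filesystem size cap after the device name.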
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.559 20:02:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:34.095 20:02:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.377 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:34.377 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:34.377 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:34.377 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:34.377 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
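verify() works by re-running setup.sh config with PCI_ALLOWED narrowed to the device under test and scanning each status line it prints; a device that is still held by a mount is reported as "Active devices: ..., so not binding PCI dev", and the glob match on that string flips found to 1. The shape of that loop, reduced to its essentials (output wording as it appears in this log):

    dev=0000:5f:00.0
    PCI_ALLOWED=$dev ./scripts/setup.sh config | while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue
        if [[ $status == *"Active devices: "*"nvme0n1:nvme0n1"* ]]; then
            echo "device is busy as expected: found=1"
        fi
    done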
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' ''
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.378 20:02:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:36.997 20:02:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:36.997 20:02:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:36.997 20:02:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:36.997 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:37.257 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:37.257
00:03:37.257 real 0m10.705s
00:03:37.257 user 0m3.062s
00:03:37.257 sys 0m5.315s
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:37.257 20:02:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:37.257 ************************************
00:03:37.257 END TEST nvme_mount
00:03:37.257 ************************************
00:03:37.257 20:02:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:37.257 20:02:24 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:37.257 20:02:24 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:37.257 20:02:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:37.257 ************************************
00:03:37.257 START TEST dm_mount
00:03:37.257 ************************************
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:37.257 20:02:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:38.633 Creating new GPT entries in memory.
00:03:38.633 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:38.633 other utilities.
00:03:38.633 20:02:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:38.633 20:02:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:38.633 20:02:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:38.633 20:02:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:38.633 20:02:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:39.569 Creating new GPT entries in memory.
00:03:39.569 The operation has completed successfully.
00:03:39.569 20:02:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:39.569 20:02:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:39.569 20:02:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:39.569 20:02:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:39.569 20:02:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:40.504 The operation has completed successfully.
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1631469
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size=
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
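dm_mount stacks a device-mapper target on the two freshly created 1 GiB partitions, waits (up to five tries) for /dev/mapper/nvme_dm_test to appear, resolves it to its dm-N node, and confirms each partition lists that node under holders/. The log never prints the dmsetup table, so the linear concatenation below is an assumption; the readlink and holder checks mirror the ones above:

    # ASSUMED table: two 1 GiB (2097152-sector) linear segments back to back
    dmsetup create nvme_dm_test <<'EOF'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    EOF
    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] && echo "p1 held by $dm"
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] && echo "p2 held by $dm"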
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.504 20:02:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:43.036 20:02:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:43.036 20:02:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:43.036 20:02:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.036 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.041 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.041 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.041 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:43.041 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
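With the filesystem living on the mapper device, the allowlist check no longer sees a plain mount on the partitions; each one is instead reported through its holder relationship, and the match succeeds against those holder@ entries. The status string and the test, lifted from the scan above:

    status='Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev'
    [[ $status == *"holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0"* ]] && found=1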
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.295 20:02:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:03:45.851 20:02:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
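cleanup_dm runs the teardown as the mirror image of the assembly: unmount if mounted, remove the mapper device (--force replaces the table with an error target if it is still busy), then wipe both backing partitions. As a sketch (mount point illustrative):

    mountpoint -q /mnt/dm_mount && umount /mnt/dm_mount
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2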
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:03:46.109 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:03:46.109
00:03:46.109 real 0m8.866s
00:03:46.109 user 0m2.047s
00:03:46.109 sys 0m3.680s
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:46.109 20:02:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:03:46.109 ************************************
00:03:46.109 END TEST dm_mount
00:03:46.109 ************************************
00:03:46.368 20:02:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:03:46.368 20:02:33 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:03:46.368 20:02:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:46.368 20:02:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:46.368 20:02:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:46.368 20:02:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:46.368 20:02:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:46.627 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:46.627 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:46.627 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:46.627 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:46.627 20:02:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:46.627 20:02:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:03:46.627 20:02:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:46.627 20:02:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:46.627 20:02:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:46.627 20:02:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:46.627 20:02:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:46.627
00:03:46.627 real 0m23.599s
00:03:46.627 user 0m6.590s
00:03:46.627 sys 0m11.396s
00:03:46.627 20:02:33 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:46.627 20:02:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:46.627 ************************************
00:03:46.627 END TEST devices
00:03:46.627 ************************************
00:03:46.628
00:03:46.628 real 1m19.924s
00:03:46.628 user 0m25.548s
00:03:46.628 sys 0m42.849s
00:03:46.628 20:02:33 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:46.628 20:02:33 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:46.628 ************************************
00:03:46.628 END TEST setup.sh
00:03:46.628 ************************************
00:03:46.628 20:02:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:03:49.160 Hugepages
00:03:49.160 node     hugesize     free /  total
00:03:49.160 node0   1048576kB        0 /      0
00:03:49.160 node0      2048kB     2048 /   2048
00:03:49.160 node1   1048576kB        0 /      0
00:03:49.160 node1      2048kB        0 /      0
00:03:49.160
00:03:49.160 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:49.160 I/OAT    0000:00:04.0    8086   2021   0       ioatdma          -          -
00:03:49.160 I/OAT    0000:00:04.1    8086   2021   0       ioatdma          -          -
00:03:49.160 I/OAT    0000:00:04.2    8086   2021   0       ioatdma          -          -
00:03:49.160 I/OAT    0000:00:04.3    8086   2021   0       ioatdma          -          -
00:03:49.160 I/OAT    0000:00:04.4    8086   2021   0       ioatdma          -          -
00:03:49.160 I/OAT    0000:00:04.5    8086   2021   0       ioatdma          -          -
00:03:49.160 I/OAT    0000:00:04.6    8086   2021   0       ioatdma          -          -
00:03:49.160 I/OAT    0000:00:04.7    8086   2021   0       ioatdma          -          -
00:03:49.419 NVMe     0000:5e:00.0    8086   0a54   0       nvme             nvme1      nvme1n1
00:03:49.419 NVMe     0000:5f:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:03:49.419 I/OAT    0000:80:04.0    8086   2021   1       ioatdma          -          -
00:03:49.419 I/OAT    0000:80:04.1    8086   2021   1       ioatdma          -          -
00:03:49.419 I/OAT    0000:80:04.2    8086   2021   1       ioatdma          -          -
00:03:49.419 I/OAT    0000:80:04.3    8086   2021   1       ioatdma          -          -
00:03:49.419 I/OAT    0000:80:04.4    8086   2021   1       ioatdma          -          -
00:03:49.419 I/OAT    0000:80:04.5    8086   2021   1       ioatdma          -          -
00:03:49.420 I/OAT    0000:80:04.6    8086   2021   1       ioatdma          -          -
00:03:49.420 I/OAT    0000:80:04.7    8086   2021   1       ioatdma          -          -
00:03:49.420 NVMe     0000:d8:00.0    8086   0a54   1       nvme             nvme2      nvme2n1
00:03:49.420 20:02:36 -- spdk/autotest.sh@130 -- # uname -s
00:03:49.420 20:02:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:03:49.420 20:02:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:03:49.420 20:02:36 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:52.708 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:52.708 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:53.276 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:53.276 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
00:03:53.276 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:53.534 20:02:40 -- common/autotest_common.sh@1528 -- # sleep 1
00:03:54.473 20:02:41 -- common/autotest_common.sh@1529 -- # bdfs=()
00:03:54.473 20:02:41 -- common/autotest_common.sh@1529 -- # local bdfs
00:03:54.473 20:02:41 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs))
00:03:54.473 20:02:41 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs
00:03:54.473 20:02:41 -- common/autotest_common.sh@1509 -- # bdfs=()
00:03:54.473 20:02:41 -- common/autotest_common.sh@1509 -- # local bdfs
00:03:54.473 20:02:41 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:54.473 20:02:41 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh
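get_nvme_bdfs builds the controller list by asking gen_nvme.sh to emit an SPDK bdev JSON config for the local drives and pulling out each transport address with jq; the (( 3 == 0 )) guard that follows is just asserting the list is non-empty. Condensed:

    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # this run: 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0

In the oacs check that follows, nvme_namespace_revert greps the OACS (Optional Admin Command Support) field out of nvme id-ctrl; here oacs=0xe, and 0xe & 0x8 = 8, so bit 3 (namespace management and attachment) is set, while unvmcap=0 means the controller has no unallocated capacity and the loop simply continues.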
# jq -r '.config[].params.traddr' 00:03:54.473 20:02:41 -- common/autotest_common.sh@1511 -- # (( 3 == 0 )) 00:03:54.473 20:02:41 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 00:03:54.473 20:02:41 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.002 Waiting for block devices as requested 00:03:57.002 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:03:57.002 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:57.002 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:57.260 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:57.260 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:57.260 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:57.260 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:57.519 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:57.519 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:57.519 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:57.519 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:57.776 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:57.776 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:57.776 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:58.034 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:58.034 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:58.034 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:58.292 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:58.292 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:03:58.292 20:02:45 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:03:58.292 20:02:45 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:58.292 20:02:45 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:58.292 20:02:45 -- common/autotest_common.sh@1498 -- # grep 0000:5e:00.0/nvme/nvme 00:03:58.292 20:02:45 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:03:58.292 20:02:45 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 ]] 00:03:58.292 20:02:45 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:03:58.292 20:02:45 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:03:58.292 20:02:45 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:03:58.292 20:02:45 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:03:58.293 20:02:45 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:03:58.293 20:02:45 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:58.293 20:02:45 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:58.293 20:02:45 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:03:58.293 20:02:45 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:58.293 20:02:45 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:58.293 20:02:45 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:03:58.293 20:02:45 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:58.293 20:02:45 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:58.293 20:02:45 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:58.293 20:02:45 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:58.293 20:02:45 -- common/autotest_common.sh@1553 -- # continue 00:03:58.293 20:02:45 -- common/autotest_common.sh@1534 -- # for bdf in 
"${bdfs[@]}" 00:03:58.551 20:02:45 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:03:58.551 20:02:45 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:58.551 20:02:45 -- common/autotest_common.sh@1498 -- # grep 0000:5f:00.0/nvme/nvme 00:03:58.551 20:02:45 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:03:58.551 20:02:45 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:03:58.551 20:02:45 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:03:58.551 20:02:45 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:58.551 20:02:45 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:58.551 20:02:45 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:58.551 20:02:45 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:58.551 20:02:45 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:58.551 20:02:45 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:58.551 20:02:45 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:03:58.551 20:02:45 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:58.551 20:02:45 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:58.551 20:02:45 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:58.551 20:02:45 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:58.551 20:02:45 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:58.551 20:02:45 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:58.551 20:02:45 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:58.551 20:02:45 -- common/autotest_common.sh@1553 -- # continue 00:03:58.551 20:02:45 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:03:58.551 20:02:45 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:03:58.551 20:02:45 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:58.551 20:02:45 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:03:58.551 20:02:45 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme2 00:03:58.551 20:02:45 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme2 ]] 00:03:58.551 20:02:45 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme2 00:03:58.551 20:02:45 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:03:58.551 20:02:45 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:03:58.551 20:02:45 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:03:58.552 20:02:45 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:03:58.552 20:02:45 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:58.552 20:02:45 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:58.552 20:02:45 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:03:58.552 20:02:45 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:58.552 20:02:45 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:58.552 20:02:45 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:03:58.552 20:02:45 -- common/autotest_common.sh@1550 -- # grep 
unvmcap 00:03:58.552 20:02:45 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:58.552 20:02:45 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:58.552 20:02:45 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:58.552 20:02:45 -- common/autotest_common.sh@1553 -- # continue 00:03:58.552 20:02:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:58.552 20:02:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.552 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:03:58.552 20:02:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:58.552 20:02:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:58.552 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:03:58.552 20:02:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:01.840 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.840 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.406 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.406 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.665 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.665 20:02:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:02.665 20:02:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.665 20:02:49 -- common/autotest_common.sh@10 -- # set +x 00:04:02.665 20:02:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:02.665 20:02:49 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:02.665 20:02:49 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:02.665 20:02:49 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:02.665 20:02:49 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:02.665 20:02:49 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:02.665 20:02:49 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:02.665 20:02:49 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:02.665 20:02:49 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.665 20:02:49 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.665 20:02:49 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:02.924 20:02:49 -- common/autotest_common.sh@1511 -- # (( 3 == 0 )) 00:04:02.924 20:02:49 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 00:04:02.924 20:02:49 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:02.924 20:02:49 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:02.924 20:02:49 -- 
common/autotest_common.sh@1576 -- # device=0x0a54 00:04:02.924 20:02:49 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:02.924 20:02:49 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:02.924 20:02:49 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:02.924 20:02:49 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:02.924 20:02:49 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:02.924 20:02:49 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:02.924 20:02:49 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:02.924 20:02:49 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:02.924 20:02:49 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:02.924 20:02:49 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:02.924 20:02:49 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:02.924 20:02:49 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:02.924 20:02:49 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 00:04:02.924 20:02:49 -- common/autotest_common.sh@1588 -- # [[ -z 0000:5e:00.0 ]] 00:04:02.924 20:02:49 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=1640796 00:04:02.924 20:02:49 -- common/autotest_common.sh@1594 -- # waitforlisten 1640796 00:04:02.924 20:02:49 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.924 20:02:49 -- common/autotest_common.sh@827 -- # '[' -z 1640796 ']' 00:04:02.924 20:02:49 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.924 20:02:49 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:02.924 20:02:49 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.924 20:02:49 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:02.924 20:02:49 -- common/autotest_common.sh@10 -- # set +x 00:04:02.924 [2024-05-16 20:02:49.900304] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
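The filter traced above narrows the box's NVMe controllers to those with PCI device id 0x0a54 by reading each function's id out of sysfs, reusing the same gen_nvme.sh | jq enumeration as the pre-cleanup pass. A minimal stand-alone sketch of that logic, assuming the same workspace layout; the function names are illustrative, not the autotest source:

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    get_nvme_bdfs() {
        # Enumerate NVMe BDFs exactly as the trace does: gen_nvme.sh emits a
        # bdev config and jq pulls out each controller's PCI address.
        "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    }
    get_nvme_bdfs_by_id() {
        # Keep only BDFs whose sysfs device id matches the requested value.
        local want="$1" bdf
        for bdf in $(get_nvme_bdfs); do
            [[ "$(cat "/sys/bus/pci/devices/$bdf/device")" == "$want" ]] && printf '%s\n' "$bdf"
        done
    }
    get_nvme_bdfs_by_id 0x0a54   # -> 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 on this rig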
00:04:02.924 [2024-05-16 20:02:49.900390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640796 ] 00:04:02.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.924 [2024-05-16 20:02:49.955993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.924 [2024-05-16 20:02:50.046586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.862 20:02:50 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:03.862 20:02:50 -- common/autotest_common.sh@860 -- # return 0 00:04:03.862 20:02:50 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:03.862 20:02:50 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:03.862 20:02:50 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:07.154 nvme0n1 00:04:07.154 20:02:53 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:07.154 [2024-05-16 20:02:53.876516] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:07.154 request: 00:04:07.154 { 00:04:07.154 "nvme_ctrlr_name": "nvme0", 00:04:07.154 "password": "test", 00:04:07.154 "method": "bdev_nvme_opal_revert", 00:04:07.154 "req_id": 1 00:04:07.154 } 00:04:07.154 Got JSON-RPC error response 00:04:07.154 response: 00:04:07.154 { 00:04:07.154 "code": -32602, 00:04:07.154 "message": "Invalid parameters" 00:04:07.154 } 00:04:07.154 20:02:53 -- common/autotest_common.sh@1600 -- # true 00:04:07.154 20:02:53 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:07.154 20:02:53 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:07.154 20:02:53 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:5f:00.0 00:04:10.443 nvme1n1 00:04:10.443 20:02:56 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:04:10.443 [2024-05-16 20:02:57.026136] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:10.443 [2024-05-16 20:02:57.026167] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:10.443 request: 00:04:10.443 { 00:04:10.443 "nvme_ctrlr_name": "nvme1", 00:04:10.443 "password": "test", 00:04:10.443 "method": "bdev_nvme_opal_revert", 00:04:10.443 "req_id": 1 00:04:10.443 } 00:04:10.443 Got JSON-RPC error response 00:04:10.443 response: 00:04:10.443 { 00:04:10.443 "code": -32603, 00:04:10.443 "message": "Internal error" 00:04:10.443 } 00:04:10.443 20:02:57 -- common/autotest_common.sh@1600 -- # true 00:04:10.443 20:02:57 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:10.443 20:02:57 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:10.443 20:02:57 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme2 -t pcie -a 0000:d8:00.0 00:04:12.978 nvme2n1 00:04:12.978 20:03:00 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme2 -p test 
00:04:13.237 [2024-05-16 20:03:00.177881] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:13.237 [2024-05-16 20:03:00.177922] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:13.237 request: 00:04:13.237 { 00:04:13.237 "nvme_ctrlr_name": "nvme2", 00:04:13.237 "password": "test", 00:04:13.237 "method": "bdev_nvme_opal_revert", 00:04:13.237 "req_id": 1 00:04:13.237 } 00:04:13.237 Got JSON-RPC error response 00:04:13.237 response: 00:04:13.237 { 00:04:13.237 "code": -32603, 00:04:13.237 "message": "Internal error" 00:04:13.237 } 00:04:13.237 20:03:00 -- common/autotest_common.sh@1600 -- # true 00:04:13.237 20:03:00 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:13.237 20:03:00 -- common/autotest_common.sh@1604 -- # killprocess 1640796 00:04:13.237 20:03:00 -- common/autotest_common.sh@946 -- # '[' -z 1640796 ']' 00:04:13.237 20:03:00 -- common/autotest_common.sh@950 -- # kill -0 1640796 00:04:13.237 20:03:00 -- common/autotest_common.sh@951 -- # uname 00:04:13.237 20:03:00 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:13.237 20:03:00 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1640796 00:04:13.237 20:03:00 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:13.237 20:03:00 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:13.237 20:03:00 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1640796' 00:04:13.237 killing process with pid 1640796 00:04:13.237 20:03:00 -- common/autotest_common.sh@965 -- # kill 1640796 00:04:13.237 20:03:00 -- common/autotest_common.sh@970 -- # wait 1640796 00:04:15.897 20:03:02 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:15.897 20:03:02 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:15.897 20:03:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:15.897 20:03:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:15.897 20:03:02 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:15.897 20:03:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:15.897 20:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:15.897 20:03:02 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:15.897 20:03:02 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:15.897 20:03:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.897 20:03:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.897 20:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:15.897 ************************************ 00:04:15.897 START TEST env 00:04:15.897 ************************************ 00:04:15.897 20:03:02 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:15.897 * Looking for test storage... 
00:04:15.897 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:15.897 20:03:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.897 20:03:02 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.897 20:03:02 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.897 20:03:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.897 ************************************ 00:04:15.897 START TEST env_memory 00:04:15.897 ************************************ 00:04:15.897 20:03:02 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.897 00:04:15.897 00:04:15.897 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.897 http://cunit.sourceforge.net/ 00:04:15.897 00:04:15.897 00:04:15.897 Suite: memory 00:04:15.897 Test: alloc and free memory map ...[2024-05-16 20:03:02.795911] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.897 passed 00:04:15.897 Test: mem map translation ...[2024-05-16 20:03:02.809431] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.897 [2024-05-16 20:03:02.809447] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.897 [2024-05-16 20:03:02.809499] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.897 [2024-05-16 20:03:02.809507] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.897 passed 00:04:15.897 Test: mem map registration ...[2024-05-16 20:03:02.833429] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:15.897 [2024-05-16 20:03:02.833447] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:15.897 passed 00:04:15.897 Test: mem map adjacent registrations ...passed 00:04:15.897 00:04:15.897 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.897 suites 1 1 n/a 0 0 00:04:15.897 tests 4 4 4 0 0 00:04:15.897 asserts 152 152 152 0 n/a 00:04:15.897 00:04:15.897 Elapsed time = 0.091 seconds 00:04:15.897 00:04:15.897 real 0m0.102s 00:04:15.897 user 0m0.089s 00:04:15.897 sys 0m0.012s 00:04:15.897 20:03:02 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.897 20:03:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.897 ************************************ 00:04:15.897 END TEST env_memory 00:04:15.897 ************************************ 00:04:15.897 20:03:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.897 20:03:02 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.897 20:03:02 env -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:04:15.898 20:03:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.898 ************************************ 00:04:15.898 START TEST env_vtophys 00:04:15.898 ************************************ 00:04:15.898 20:03:02 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.898 EAL: lib.eal log level changed from notice to debug 00:04:15.898 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.898 EAL: Detected lcore 1 as core 1 on socket 0 00:04:15.898 EAL: Detected lcore 2 as core 2 on socket 0 00:04:15.898 EAL: Detected lcore 3 as core 3 on socket 0 00:04:15.898 EAL: Detected lcore 4 as core 4 on socket 0 00:04:15.898 EAL: Detected lcore 5 as core 5 on socket 0 00:04:15.898 EAL: Detected lcore 6 as core 8 on socket 0 00:04:15.898 EAL: Detected lcore 7 as core 9 on socket 0 00:04:15.898 EAL: Detected lcore 8 as core 10 on socket 0 00:04:15.898 EAL: Detected lcore 9 as core 11 on socket 0 00:04:15.898 EAL: Detected lcore 10 as core 12 on socket 0 00:04:15.898 EAL: Detected lcore 11 as core 16 on socket 0 00:04:15.898 EAL: Detected lcore 12 as core 17 on socket 0 00:04:15.898 EAL: Detected lcore 13 as core 18 on socket 0 00:04:15.898 EAL: Detected lcore 14 as core 19 on socket 0 00:04:15.898 EAL: Detected lcore 15 as core 20 on socket 0 00:04:15.898 EAL: Detected lcore 16 as core 21 on socket 0 00:04:15.898 EAL: Detected lcore 17 as core 24 on socket 0 00:04:15.898 EAL: Detected lcore 18 as core 25 on socket 0 00:04:15.898 EAL: Detected lcore 19 as core 26 on socket 0 00:04:15.898 EAL: Detected lcore 20 as core 27 on socket 0 00:04:15.898 EAL: Detected lcore 21 as core 28 on socket 0 00:04:15.898 EAL: Detected lcore 22 as core 0 on socket 1 00:04:15.898 EAL: Detected lcore 23 as core 1 on socket 1 00:04:15.898 EAL: Detected lcore 24 as core 2 on socket 1 00:04:15.898 EAL: Detected lcore 25 as core 3 on socket 1 00:04:15.898 EAL: Detected lcore 26 as core 4 on socket 1 00:04:15.898 EAL: Detected lcore 27 as core 5 on socket 1 00:04:15.898 EAL: Detected lcore 28 as core 8 on socket 1 00:04:15.898 EAL: Detected lcore 29 as core 9 on socket 1 00:04:15.898 EAL: Detected lcore 30 as core 10 on socket 1 00:04:15.898 EAL: Detected lcore 31 as core 11 on socket 1 00:04:15.898 EAL: Detected lcore 32 as core 12 on socket 1 00:04:15.898 EAL: Detected lcore 33 as core 16 on socket 1 00:04:15.898 EAL: Detected lcore 34 as core 17 on socket 1 00:04:15.898 EAL: Detected lcore 35 as core 18 on socket 1 00:04:15.898 EAL: Detected lcore 36 as core 19 on socket 1 00:04:15.898 EAL: Detected lcore 37 as core 20 on socket 1 00:04:15.898 EAL: Detected lcore 38 as core 21 on socket 1 00:04:15.898 EAL: Detected lcore 39 as core 24 on socket 1 00:04:15.898 EAL: Detected lcore 40 as core 25 on socket 1 00:04:15.898 EAL: Detected lcore 41 as core 26 on socket 1 00:04:15.898 EAL: Detected lcore 42 as core 27 on socket 1 00:04:15.898 EAL: Detected lcore 43 as core 28 on socket 1 00:04:15.898 EAL: Detected lcore 44 as core 0 on socket 0 00:04:15.898 EAL: Detected lcore 45 as core 1 on socket 0 00:04:15.898 EAL: Detected lcore 46 as core 2 on socket 0 00:04:15.898 EAL: Detected lcore 47 as core 3 on socket 0 00:04:15.898 EAL: Detected lcore 48 as core 4 on socket 0 00:04:15.898 EAL: Detected lcore 49 as core 5 on socket 0 00:04:15.898 EAL: Detected lcore 50 as core 8 on socket 0 00:04:15.898 EAL: Detected lcore 51 as core 9 on socket 0 00:04:15.898 EAL: Detected lcore 52 as core 10 on socket 0 00:04:15.898 
EAL: Detected lcore 53 as core 11 on socket 0 00:04:15.898 EAL: Detected lcore 54 as core 12 on socket 0 00:04:15.898 EAL: Detected lcore 55 as core 16 on socket 0 00:04:15.898 EAL: Detected lcore 56 as core 17 on socket 0 00:04:15.898 EAL: Detected lcore 57 as core 18 on socket 0 00:04:15.898 EAL: Detected lcore 58 as core 19 on socket 0 00:04:15.898 EAL: Detected lcore 59 as core 20 on socket 0 00:04:15.898 EAL: Detected lcore 60 as core 21 on socket 0 00:04:15.898 EAL: Detected lcore 61 as core 24 on socket 0 00:04:15.898 EAL: Detected lcore 62 as core 25 on socket 0 00:04:15.898 EAL: Detected lcore 63 as core 26 on socket 0 00:04:15.898 EAL: Detected lcore 64 as core 27 on socket 0 00:04:15.898 EAL: Detected lcore 65 as core 28 on socket 0 00:04:15.898 EAL: Detected lcore 66 as core 0 on socket 1 00:04:15.898 EAL: Detected lcore 67 as core 1 on socket 1 00:04:15.898 EAL: Detected lcore 68 as core 2 on socket 1 00:04:15.898 EAL: Detected lcore 69 as core 3 on socket 1 00:04:15.898 EAL: Detected lcore 70 as core 4 on socket 1 00:04:15.898 EAL: Detected lcore 71 as core 5 on socket 1 00:04:15.898 EAL: Detected lcore 72 as core 8 on socket 1 00:04:15.898 EAL: Detected lcore 73 as core 9 on socket 1 00:04:15.898 EAL: Detected lcore 74 as core 10 on socket 1 00:04:15.898 EAL: Detected lcore 75 as core 11 on socket 1 00:04:15.898 EAL: Detected lcore 76 as core 12 on socket 1 00:04:15.898 EAL: Detected lcore 77 as core 16 on socket 1 00:04:15.898 EAL: Detected lcore 78 as core 17 on socket 1 00:04:15.898 EAL: Detected lcore 79 as core 18 on socket 1 00:04:15.898 EAL: Detected lcore 80 as core 19 on socket 1 00:04:15.898 EAL: Detected lcore 81 as core 20 on socket 1 00:04:15.898 EAL: Detected lcore 82 as core 21 on socket 1 00:04:15.898 EAL: Detected lcore 83 as core 24 on socket 1 00:04:15.898 EAL: Detected lcore 84 as core 25 on socket 1 00:04:15.898 EAL: Detected lcore 85 as core 26 on socket 1 00:04:15.898 EAL: Detected lcore 86 as core 27 on socket 1 00:04:15.898 EAL: Detected lcore 87 as core 28 on socket 1 00:04:15.898 EAL: Maximum logical cores by configuration: 128 00:04:15.898 EAL: Detected CPU lcores: 88 00:04:15.898 EAL: Detected NUMA nodes: 2 00:04:15.898 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:15.898 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:15.898 EAL: Checking presence of .so 'librte_eal.so' 00:04:15.898 EAL: Detected static linkage of DPDK 00:04:15.898 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.898 EAL: Bus pci wants IOVA as 'DC' 00:04:15.898 EAL: Buses did not request a specific IOVA mode. 00:04:15.898 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:15.898 EAL: Selected IOVA mode 'VA' 00:04:15.898 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.898 EAL: Probing VFIO support... 00:04:15.898 EAL: IOMMU type 1 (Type 1) is supported 00:04:15.898 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:15.898 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:15.898 EAL: VFIO support initialized 00:04:15.898 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.898 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.898 EAL: Setting up physically contiguous memory... 
00:04:15.898 EAL: Setting maximum number of open files to 524288 00:04:15.898 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.898 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:15.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.898 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.898 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.898 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.898 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.898 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.898 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.898 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.898 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.898 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.898 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.898 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.898 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:15.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.898 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:15.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.898 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:15.898 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:15.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.898 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:15.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.898 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:15.898 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:15.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.898 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:15.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.898 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:15.898 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:15.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.898 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:15.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.898 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:15.898 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:15.898 EAL: Hugepages will be freed exactly as allocated. 00:04:15.898 EAL: No shared files mode enabled, IPC is disabled 00:04:15.898 EAL: No shared files mode enabled, IPC is disabled 00:04:15.898 EAL: TSC frequency is ~2100000 KHz 00:04:15.898 EAL: Main lcore 0 is ready (tid=7f8b42dd3a00;cpuset=[0]) 00:04:15.898 EAL: Trying to obtain current memory policy. 00:04:15.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.898 EAL: Restoring previous memory policy: 0 00:04:15.898 EAL: request: mp_malloc_sync 00:04:15.898 EAL: No shared files mode enabled, IPC is disabled 00:04:15.898 EAL: Heap on socket 0 was expanded by 2MB 00:04:15.898 EAL: No shared files mode enabled, IPC is disabled 00:04:15.898 EAL: Mem event callback 'spdk:(nil)' registered 00:04:15.898 00:04:15.898 00:04:15.898 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.898 http://cunit.sourceforge.net/ 00:04:15.898 00:04:15.898 00:04:15.898 Suite: components_suite 00:04:15.898 Test: vtophys_malloc_test ...passed 00:04:15.898 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:15.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.898 EAL: Restoring previous memory policy: 4 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.899 EAL: Trying to obtain current memory policy. 00:04:15.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.899 EAL: Restoring previous memory policy: 4 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.899 EAL: Trying to obtain current memory policy. 00:04:15.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.899 EAL: Restoring previous memory policy: 4 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.899 EAL: Trying to obtain current memory policy. 
00:04:15.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.899 EAL: Restoring previous memory policy: 4 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.899 EAL: Trying to obtain current memory policy. 00:04:15.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.899 EAL: Restoring previous memory policy: 4 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was shrunk by 34MB 00:04:15.899 EAL: Trying to obtain current memory policy. 00:04:15.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.899 EAL: Restoring previous memory policy: 4 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.899 EAL: request: mp_malloc_sync 00:04:15.899 EAL: No shared files mode enabled, IPC is disabled 00:04:15.899 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.899 EAL: Trying to obtain current memory policy. 00:04:15.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.158 EAL: Restoring previous memory policy: 4 00:04:16.158 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.158 EAL: request: mp_malloc_sync 00:04:16.158 EAL: No shared files mode enabled, IPC is disabled 00:04:16.158 EAL: Heap on socket 0 was expanded by 130MB 00:04:16.158 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.158 EAL: request: mp_malloc_sync 00:04:16.158 EAL: No shared files mode enabled, IPC is disabled 00:04:16.158 EAL: Heap on socket 0 was shrunk by 130MB 00:04:16.158 EAL: Trying to obtain current memory policy. 00:04:16.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.158 EAL: Restoring previous memory policy: 4 00:04:16.158 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.158 EAL: request: mp_malloc_sync 00:04:16.158 EAL: No shared files mode enabled, IPC is disabled 00:04:16.158 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.158 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.158 EAL: request: mp_malloc_sync 00:04:16.158 EAL: No shared files mode enabled, IPC is disabled 00:04:16.158 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.158 EAL: Trying to obtain current memory policy. 
00:04:16.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.416 EAL: Restoring previous memory policy: 4 00:04:16.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.416 EAL: request: mp_malloc_sync 00:04:16.416 EAL: No shared files mode enabled, IPC is disabled 00:04:16.416 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.416 EAL: request: mp_malloc_sync 00:04:16.416 EAL: No shared files mode enabled, IPC is disabled 00:04:16.416 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.416 EAL: Trying to obtain current memory policy. 00:04:16.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.675 EAL: Restoring previous memory policy: 4 00:04:16.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.675 EAL: request: mp_malloc_sync 00:04:16.675 EAL: No shared files mode enabled, IPC is disabled 00:04:16.675 EAL: Heap on socket 0 was expanded by 1026MB 00:04:16.934 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.192 EAL: request: mp_malloc_sync 00:04:17.192 EAL: No shared files mode enabled, IPC is disabled 00:04:17.192 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:17.192 passed 00:04:17.192 00:04:17.192 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.192 suites 1 1 n/a 0 0 00:04:17.192 tests 2 2 2 0 0 00:04:17.192 asserts 497 497 497 0 n/a 00:04:17.192 00:04:17.192 Elapsed time = 1.094 seconds 00:04:17.192 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.192 EAL: request: mp_malloc_sync 00:04:17.192 EAL: No shared files mode enabled, IPC is disabled 00:04:17.192 EAL: Heap on socket 0 was shrunk by 2MB 00:04:17.192 EAL: No shared files mode enabled, IPC is disabled 00:04:17.192 EAL: No shared files mode enabled, IPC is disabled 00:04:17.192 EAL: No shared files mode enabled, IPC is disabled 00:04:17.192 00:04:17.192 real 0m1.187s 00:04:17.192 user 0m0.701s 00:04:17.192 sys 0m0.458s 00:04:17.192 20:03:04 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.192 20:03:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:17.192 ************************************ 00:04:17.192 END TEST env_vtophys 00:04:17.192 ************************************ 00:04:17.192 20:03:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:17.192 20:03:04 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.192 20:03:04 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.192 20:03:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.192 ************************************ 00:04:17.192 START TEST env_pci 00:04:17.192 ************************************ 00:04:17.192 20:03:04 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:17.192 00:04:17.192 00:04:17.192 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.192 http://cunit.sourceforge.net/ 00:04:17.192 00:04:17.192 00:04:17.192 Suite: pci 00:04:17.193 Test: pci_hook ...[2024-05-16 20:03:04.188149] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1643222 has claimed it 00:04:17.193 EAL: Cannot find device (10000:00:01.0) 00:04:17.193 EAL: Failed to attach device on primary process 00:04:17.193 passed 00:04:17.193 00:04:17.193 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:17.193 suites 1 1 n/a 0 0 00:04:17.193 tests 1 1 1 0 0 00:04:17.193 asserts 25 25 25 0 n/a 00:04:17.193 00:04:17.193 Elapsed time = 0.029 seconds 00:04:17.193 00:04:17.193 real 0m0.044s 00:04:17.193 user 0m0.012s 00:04:17.193 sys 0m0.032s 00:04:17.193 20:03:04 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.193 20:03:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:17.193 ************************************ 00:04:17.193 END TEST env_pci 00:04:17.193 ************************************ 00:04:17.193 20:03:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:17.193 20:03:04 env -- env/env.sh@15 -- # uname 00:04:17.193 20:03:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:17.193 20:03:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:17.193 20:03:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.193 20:03:04 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:17.193 20:03:04 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.193 20:03:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.193 ************************************ 00:04:17.193 START TEST env_dpdk_post_init 00:04:17.193 ************************************ 00:04:17.193 20:03:04 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.193 EAL: Detected CPU lcores: 88 00:04:17.193 EAL: Detected NUMA nodes: 2 00:04:17.193 EAL: Detected static linkage of DPDK 00:04:17.193 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.193 EAL: Selected IOVA mode 'VA' 00:04:17.193 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.193 EAL: VFIO support initialized 00:04:17.193 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.451 EAL: Using IOMMU type 1 (Type 1) 00:04:18.018 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:18.954 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:04:19.521 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:22.802 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:22.802 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001008000 00:04:22.802 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:04:22.802 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001004000 00:04:23.369 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:23.369 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:04:23.628 Starting DPDK initialization... 00:04:23.628 Starting SPDK post initialization... 00:04:23.628 SPDK NVMe probe 00:04:23.628 Attaching to 0000:5e:00.0 00:04:23.628 Attaching to 0000:5f:00.0 00:04:23.628 Attaching to 0000:d8:00.0 00:04:23.628 Attached to 0000:d8:00.0 00:04:23.628 Attached to 0000:5e:00.0 00:04:23.628 Attached to 0000:5f:00.0 00:04:23.628 Cleaning up... 
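The env_dpdk_post_init pass above rebinds the controllers to vfio-pci, probes each one with the spdk_nvme driver, then unmaps the BARs on teardown. Reproducing the same probe by hand uses the flags visible in the trace; the workspace path is assumed, and the run needs root so the VFIO mappings succeed:

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    sudo ./scripts/setup.sh          # ioatdma/nvme -> vfio-pci, as logged above
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
    sudo ./scripts/setup.sh reset    # hand the devices back to the kernel drivers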
00:04:23.628 00:04:23.628 real 0m6.293s 00:04:23.628 user 0m3.957s 00:04:23.628 sys 0m0.118s 00:04:23.628 20:03:10 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.628 20:03:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.628 ************************************ 00:04:23.628 END TEST env_dpdk_post_init 00:04:23.628 ************************************ 00:04:23.628 20:03:10 env -- env/env.sh@26 -- # uname 00:04:23.629 20:03:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:23.629 20:03:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:23.629 20:03:10 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.629 20:03:10 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.629 20:03:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.629 ************************************ 00:04:23.629 START TEST env_mem_callbacks 00:04:23.629 ************************************ 00:04:23.629 20:03:10 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:23.629 EAL: Detected CPU lcores: 88 00:04:23.629 EAL: Detected NUMA nodes: 2 00:04:23.629 EAL: Detected static linkage of DPDK 00:04:23.629 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.629 EAL: Selected IOVA mode 'VA' 00:04:23.629 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.629 EAL: VFIO support initialized 00:04:23.629 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.629 00:04:23.629 00:04:23.629 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.629 http://cunit.sourceforge.net/ 00:04:23.629 00:04:23.629 00:04:23.629 Suite: memory 00:04:23.629 Test: test ... 
00:04:23.629 register 0x200000200000 2097152 00:04:23.629 malloc 3145728 00:04:23.629 register 0x200000400000 4194304 00:04:23.629 buf 0x200000500000 len 3145728 PASSED 00:04:23.629 malloc 64 00:04:23.629 buf 0x2000004fff40 len 64 PASSED 00:04:23.629 malloc 4194304 00:04:23.629 register 0x200000800000 6291456 00:04:23.629 buf 0x200000a00000 len 4194304 PASSED 00:04:23.629 free 0x200000500000 3145728 00:04:23.629 free 0x2000004fff40 64 00:04:23.629 unregister 0x200000400000 4194304 PASSED 00:04:23.629 free 0x200000a00000 4194304 00:04:23.629 unregister 0x200000800000 6291456 PASSED 00:04:23.629 malloc 8388608 00:04:23.629 register 0x200000400000 10485760 00:04:23.629 buf 0x200000600000 len 8388608 PASSED 00:04:23.629 free 0x200000600000 8388608 00:04:23.629 unregister 0x200000400000 10485760 PASSED 00:04:23.629 passed 00:04:23.629 00:04:23.629 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.629 suites 1 1 n/a 0 0 00:04:23.629 tests 1 1 1 0 0 00:04:23.629 asserts 15 15 15 0 n/a 00:04:23.629 00:04:23.629 Elapsed time = 0.006 seconds 00:04:23.629 00:04:23.629 real 0m0.057s 00:04:23.629 user 0m0.021s 00:04:23.629 sys 0m0.035s 00:04:23.629 20:03:10 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.629 20:03:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:23.629 ************************************ 00:04:23.629 END TEST env_mem_callbacks 00:04:23.629 ************************************ 00:04:23.629 00:04:23.629 real 0m8.119s 00:04:23.629 user 0m4.935s 00:04:23.629 sys 0m0.951s 00:04:23.629 20:03:10 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.629 20:03:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.629 ************************************ 00:04:23.629 END TEST env 00:04:23.629 ************************************ 00:04:23.889 20:03:10 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:23.889 20:03:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.889 20:03:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.889 20:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:23.889 ************************************ 00:04:23.889 START TEST rpc 00:04:23.889 ************************************ 00:04:23.889 20:03:10 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:23.889 * Looking for test storage... 00:04:23.889 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:23.889 20:03:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1644823 00:04:23.889 20:03:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.889 20:03:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:23.889 20:03:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1644823 00:04:23.889 20:03:10 rpc -- common/autotest_common.sh@827 -- # '[' -z 1644823 ']' 00:04:23.889 20:03:10 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.889 20:03:10 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:23.889 20:03:10 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
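waitforlisten above parks the test until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A rough equivalent of that wait, assuming only the default /var/tmp/spdk.sock path; the real helper in common/autotest_common.sh carries more bookkeeping than this, though the max_retries=100 bound is visible in the trace:

    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods is a cheap probe; success means the socket is up.
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$spdk_pid" 2>/dev/null || { echo 'spdk_tgt died' >&2; exit 1; }
        sleep 0.5
    done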
00:04:23.889 20:03:10 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:23.889 20:03:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.889 [2024-05-16 20:03:10.939013] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:23.889 [2024-05-16 20:03:10.939074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644823 ] 00:04:23.889 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.889 [2024-05-16 20:03:10.993140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.148 [2024-05-16 20:03:11.076826] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:24.148 [2024-05-16 20:03:11.076862] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1644823' to capture a snapshot of events at runtime. 00:04:24.148 [2024-05-16 20:03:11.076868] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:24.148 [2024-05-16 20:03:11.076874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:24.148 [2024-05-16 20:03:11.076879] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1644823 for offline analysis/debug. 00:04:24.148 [2024-05-16 20:03:11.076898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.148 20:03:11 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:24.148 20:03:11 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:24.148 20:03:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:24.149 20:03:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:24.149 20:03:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:24.149 20:03:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:24.149 20:03:11 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.149 20:03:11 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.149 20:03:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.408 ************************************ 00:04:24.408 START TEST rpc_integrity 00:04:24.408 ************************************ 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:24.408 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.408 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.408 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.408 20:03:11 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.408 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.408 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:24.408 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.408 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.408 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.408 { 00:04:24.408 "name": "Malloc0", 00:04:24.408 "aliases": [ 00:04:24.408 "e52a0b5d-485d-4c1d-b408-f14105a81011" 00:04:24.408 ], 00:04:24.408 "product_name": "Malloc disk", 00:04:24.408 "block_size": 512, 00:04:24.408 "num_blocks": 16384, 00:04:24.408 "uuid": "e52a0b5d-485d-4c1d-b408-f14105a81011", 00:04:24.408 "assigned_rate_limits": { 00:04:24.408 "rw_ios_per_sec": 0, 00:04:24.408 "rw_mbytes_per_sec": 0, 00:04:24.408 "r_mbytes_per_sec": 0, 00:04:24.408 "w_mbytes_per_sec": 0 00:04:24.408 }, 00:04:24.408 "claimed": false, 00:04:24.408 "zoned": false, 00:04:24.408 "supported_io_types": { 00:04:24.408 "read": true, 00:04:24.408 "write": true, 00:04:24.408 "unmap": true, 00:04:24.408 "write_zeroes": true, 00:04:24.408 "flush": true, 00:04:24.408 "reset": true, 00:04:24.408 "compare": false, 00:04:24.408 "compare_and_write": false, 00:04:24.408 "abort": true, 00:04:24.408 "nvme_admin": false, 00:04:24.408 "nvme_io": false 00:04:24.409 }, 00:04:24.409 "memory_domains": [ 00:04:24.409 { 00:04:24.409 "dma_device_id": "system", 00:04:24.409 "dma_device_type": 1 00:04:24.409 }, 00:04:24.409 { 00:04:24.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.409 "dma_device_type": 2 00:04:24.409 } 00:04:24.409 ], 00:04:24.409 "driver_specific": {} 00:04:24.409 } 00:04:24.409 ]' 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.409 [2024-05-16 20:03:11.431060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:24.409 [2024-05-16 20:03:11.431091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.409 [2024-05-16 20:03:11.431104] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e44240 00:04:24.409 [2024-05-16 20:03:11.431111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.409 [2024-05-16 20:03:11.431970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.409 [2024-05-16 20:03:11.431991] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.409 Passthru0 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.409 { 00:04:24.409 "name": "Malloc0", 00:04:24.409 "aliases": [ 00:04:24.409 "e52a0b5d-485d-4c1d-b408-f14105a81011" 00:04:24.409 ], 00:04:24.409 "product_name": "Malloc disk", 00:04:24.409 "block_size": 512, 00:04:24.409 "num_blocks": 16384, 00:04:24.409 "uuid": "e52a0b5d-485d-4c1d-b408-f14105a81011", 00:04:24.409 "assigned_rate_limits": { 00:04:24.409 "rw_ios_per_sec": 0, 00:04:24.409 "rw_mbytes_per_sec": 0, 00:04:24.409 "r_mbytes_per_sec": 0, 00:04:24.409 "w_mbytes_per_sec": 0 00:04:24.409 }, 00:04:24.409 "claimed": true, 00:04:24.409 "claim_type": "exclusive_write", 00:04:24.409 "zoned": false, 00:04:24.409 "supported_io_types": { 00:04:24.409 "read": true, 00:04:24.409 "write": true, 00:04:24.409 "unmap": true, 00:04:24.409 "write_zeroes": true, 00:04:24.409 "flush": true, 00:04:24.409 "reset": true, 00:04:24.409 "compare": false, 00:04:24.409 "compare_and_write": false, 00:04:24.409 "abort": true, 00:04:24.409 "nvme_admin": false, 00:04:24.409 "nvme_io": false 00:04:24.409 }, 00:04:24.409 "memory_domains": [ 00:04:24.409 { 00:04:24.409 "dma_device_id": "system", 00:04:24.409 "dma_device_type": 1 00:04:24.409 }, 00:04:24.409 { 00:04:24.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.409 "dma_device_type": 2 00:04:24.409 } 00:04:24.409 ], 00:04:24.409 "driver_specific": {} 00:04:24.409 }, 00:04:24.409 { 00:04:24.409 "name": "Passthru0", 00:04:24.409 "aliases": [ 00:04:24.409 "ba3f81ab-81cc-56dc-8901-8d8f89e12b93" 00:04:24.409 ], 00:04:24.409 "product_name": "passthru", 00:04:24.409 "block_size": 512, 00:04:24.409 "num_blocks": 16384, 00:04:24.409 "uuid": "ba3f81ab-81cc-56dc-8901-8d8f89e12b93", 00:04:24.409 "assigned_rate_limits": { 00:04:24.409 "rw_ios_per_sec": 0, 00:04:24.409 "rw_mbytes_per_sec": 0, 00:04:24.409 "r_mbytes_per_sec": 0, 00:04:24.409 "w_mbytes_per_sec": 0 00:04:24.409 }, 00:04:24.409 "claimed": false, 00:04:24.409 "zoned": false, 00:04:24.409 "supported_io_types": { 00:04:24.409 "read": true, 00:04:24.409 "write": true, 00:04:24.409 "unmap": true, 00:04:24.409 "write_zeroes": true, 00:04:24.409 "flush": true, 00:04:24.409 "reset": true, 00:04:24.409 "compare": false, 00:04:24.409 "compare_and_write": false, 00:04:24.409 "abort": true, 00:04:24.409 "nvme_admin": false, 00:04:24.409 "nvme_io": false 00:04:24.409 }, 00:04:24.409 "memory_domains": [ 00:04:24.409 { 00:04:24.409 "dma_device_id": "system", 00:04:24.409 "dma_device_type": 1 00:04:24.409 }, 00:04:24.409 { 00:04:24.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.409 "dma_device_type": 2 00:04:24.409 } 00:04:24.409 ], 00:04:24.409 "driver_specific": { 00:04:24.409 "passthru": { 00:04:24.409 "name": "Passthru0", 00:04:24.409 "base_bdev_name": "Malloc0" 00:04:24.409 } 00:04:24.409 } 00:04:24.409 } 00:04:24.409 ]' 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
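
The trace above walks SPDK's bdev RPC surface end to end: bdev_malloc_create 8 512 makes an 8 MiB bdev with 512-byte blocks (hence num_blocks 16384 in the dump), bdev_passthru_create layers Passthru0 on top and claims the base exclusively (claimed: true, claim_type: exclusive_write in the second dump), and the jq length checks confirm one, then two, registered bdevs. A minimal standalone replay of those steps with scripts/rpc.py — assuming an spdk_tgt already listening on the default /var/tmp/spdk.sock and a shell sitting in an SPDK checkout — might look like:

    # Sketch only: replays the rpc_integrity steps by hand against a running target.
    malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)          # prints the new bdev name, e.g. Malloc0
    ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    test "$(./scripts/rpc.py bdev_get_bdevs | jq length)" -eq 2  # base + passthru
    # Tear down in reverse order: the passthru holds a claim on the base bdev.
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$malloc"
    test "$(./scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0
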
00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.409 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.409 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.668 20:03:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.668 00:04:24.668 real 0m0.241s 00:04:24.668 user 0m0.160s 00:04:24.668 sys 0m0.036s 00:04:24.668 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.668 20:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.668 ************************************ 00:04:24.668 END TEST rpc_integrity 00:04:24.668 ************************************ 00:04:24.668 20:03:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:24.668 20:03:11 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.668 20:03:11 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.668 20:03:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.668 ************************************ 00:04:24.668 START TEST rpc_plugins 00:04:24.668 ************************************ 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:24.668 { 00:04:24.668 "name": "Malloc1", 00:04:24.668 "aliases": [ 00:04:24.668 "86bd4e9e-298d-4191-b972-6a994b3607cb" 00:04:24.668 ], 00:04:24.668 "product_name": "Malloc disk", 00:04:24.668 "block_size": 4096, 00:04:24.668 "num_blocks": 256, 00:04:24.668 "uuid": "86bd4e9e-298d-4191-b972-6a994b3607cb", 00:04:24.668 "assigned_rate_limits": { 00:04:24.668 "rw_ios_per_sec": 0, 00:04:24.668 "rw_mbytes_per_sec": 0, 00:04:24.668 "r_mbytes_per_sec": 0, 00:04:24.668 "w_mbytes_per_sec": 0 00:04:24.668 }, 00:04:24.668 "claimed": false, 00:04:24.668 "zoned": false, 00:04:24.668 "supported_io_types": { 00:04:24.668 "read": true, 00:04:24.668 "write": true, 00:04:24.668 "unmap": true, 00:04:24.668 "write_zeroes": true, 
00:04:24.668 "flush": true, 00:04:24.668 "reset": true, 00:04:24.668 "compare": false, 00:04:24.668 "compare_and_write": false, 00:04:24.668 "abort": true, 00:04:24.668 "nvme_admin": false, 00:04:24.668 "nvme_io": false 00:04:24.668 }, 00:04:24.668 "memory_domains": [ 00:04:24.668 { 00:04:24.668 "dma_device_id": "system", 00:04:24.668 "dma_device_type": 1 00:04:24.668 }, 00:04:24.668 { 00:04:24.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.668 "dma_device_type": 2 00:04:24.668 } 00:04:24.668 ], 00:04:24.668 "driver_specific": {} 00:04:24.668 } 00:04:24.668 ]' 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:24.668 20:03:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:24.668 00:04:24.668 real 0m0.130s 00:04:24.668 user 0m0.090s 00:04:24.668 sys 0m0.016s 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.668 20:03:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.668 ************************************ 00:04:24.668 END TEST rpc_plugins 00:04:24.668 ************************************ 00:04:24.668 20:03:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:24.668 20:03:11 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.668 20:03:11 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.668 20:03:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.927 ************************************ 00:04:24.927 START TEST rpc_trace_cmd_test 00:04:24.927 ************************************ 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:24.927 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1644823", 00:04:24.927 "tpoint_group_mask": "0x8", 00:04:24.927 "iscsi_conn": { 00:04:24.927 "mask": "0x2", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "scsi": { 00:04:24.927 "mask": "0x4", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "bdev": { 00:04:24.927 "mask": "0x8", 00:04:24.927 "tpoint_mask": 
"0xffffffffffffffff" 00:04:24.927 }, 00:04:24.927 "nvmf_rdma": { 00:04:24.927 "mask": "0x10", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "nvmf_tcp": { 00:04:24.927 "mask": "0x20", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "ftl": { 00:04:24.927 "mask": "0x40", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "blobfs": { 00:04:24.927 "mask": "0x80", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "dsa": { 00:04:24.927 "mask": "0x200", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "thread": { 00:04:24.927 "mask": "0x400", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "nvme_pcie": { 00:04:24.927 "mask": "0x800", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "iaa": { 00:04:24.927 "mask": "0x1000", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "nvme_tcp": { 00:04:24.927 "mask": "0x2000", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "bdev_nvme": { 00:04:24.927 "mask": "0x4000", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 }, 00:04:24.927 "sock": { 00:04:24.927 "mask": "0x8000", 00:04:24.927 "tpoint_mask": "0x0" 00:04:24.927 } 00:04:24.927 }' 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.927 20:03:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.927 20:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:24.927 00:04:24.927 real 0m0.193s 00:04:24.927 user 0m0.167s 00:04:24.927 sys 0m0.016s 00:04:24.927 20:03:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.927 20:03:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.927 ************************************ 00:04:24.927 END TEST rpc_trace_cmd_test 00:04:24.927 ************************************ 00:04:24.927 20:03:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.927 20:03:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.927 20:03:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.927 20:03:12 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.927 20:03:12 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.927 20:03:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.186 ************************************ 00:04:25.187 START TEST rpc_daemon_integrity 00:04:25.187 ************************************ 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.187 { 00:04:25.187 "name": "Malloc2", 00:04:25.187 "aliases": [ 00:04:25.187 "d0c9119f-e008-4f10-8c6f-a3ff0623ee34" 00:04:25.187 ], 00:04:25.187 "product_name": "Malloc disk", 00:04:25.187 "block_size": 512, 00:04:25.187 "num_blocks": 16384, 00:04:25.187 "uuid": "d0c9119f-e008-4f10-8c6f-a3ff0623ee34", 00:04:25.187 "assigned_rate_limits": { 00:04:25.187 "rw_ios_per_sec": 0, 00:04:25.187 "rw_mbytes_per_sec": 0, 00:04:25.187 "r_mbytes_per_sec": 0, 00:04:25.187 "w_mbytes_per_sec": 0 00:04:25.187 }, 00:04:25.187 "claimed": false, 00:04:25.187 "zoned": false, 00:04:25.187 "supported_io_types": { 00:04:25.187 "read": true, 00:04:25.187 "write": true, 00:04:25.187 "unmap": true, 00:04:25.187 "write_zeroes": true, 00:04:25.187 "flush": true, 00:04:25.187 "reset": true, 00:04:25.187 "compare": false, 00:04:25.187 "compare_and_write": false, 00:04:25.187 "abort": true, 00:04:25.187 "nvme_admin": false, 00:04:25.187 "nvme_io": false 00:04:25.187 }, 00:04:25.187 "memory_domains": [ 00:04:25.187 { 00:04:25.187 "dma_device_id": "system", 00:04:25.187 "dma_device_type": 1 00:04:25.187 }, 00:04:25.187 { 00:04:25.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.187 "dma_device_type": 2 00:04:25.187 } 00:04:25.187 ], 00:04:25.187 "driver_specific": {} 00:04:25.187 } 00:04:25.187 ]' 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.187 [2024-05-16 20:03:12.193050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:25.187 [2024-05-16 20:03:12.193078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.187 [2024-05-16 20:03:12.193091] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4fd7140 00:04:25.187 [2024-05-16 20:03:12.193097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.187 [2024-05-16 20:03:12.193836] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.187 [2024-05-16 20:03:12.193856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.187 Passthru0 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.187 { 00:04:25.187 "name": "Malloc2", 00:04:25.187 "aliases": [ 00:04:25.187 "d0c9119f-e008-4f10-8c6f-a3ff0623ee34" 00:04:25.187 ], 00:04:25.187 "product_name": "Malloc disk", 00:04:25.187 "block_size": 512, 00:04:25.187 "num_blocks": 16384, 00:04:25.187 "uuid": "d0c9119f-e008-4f10-8c6f-a3ff0623ee34", 00:04:25.187 "assigned_rate_limits": { 00:04:25.187 "rw_ios_per_sec": 0, 00:04:25.187 "rw_mbytes_per_sec": 0, 00:04:25.187 "r_mbytes_per_sec": 0, 00:04:25.187 "w_mbytes_per_sec": 0 00:04:25.187 }, 00:04:25.187 "claimed": true, 00:04:25.187 "claim_type": "exclusive_write", 00:04:25.187 "zoned": false, 00:04:25.187 "supported_io_types": { 00:04:25.187 "read": true, 00:04:25.187 "write": true, 00:04:25.187 "unmap": true, 00:04:25.187 "write_zeroes": true, 00:04:25.187 "flush": true, 00:04:25.187 "reset": true, 00:04:25.187 "compare": false, 00:04:25.187 "compare_and_write": false, 00:04:25.187 "abort": true, 00:04:25.187 "nvme_admin": false, 00:04:25.187 "nvme_io": false 00:04:25.187 }, 00:04:25.187 "memory_domains": [ 00:04:25.187 { 00:04:25.187 "dma_device_id": "system", 00:04:25.187 "dma_device_type": 1 00:04:25.187 }, 00:04:25.187 { 00:04:25.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.187 "dma_device_type": 2 00:04:25.187 } 00:04:25.187 ], 00:04:25.187 "driver_specific": {} 00:04:25.187 }, 00:04:25.187 { 00:04:25.187 "name": "Passthru0", 00:04:25.187 "aliases": [ 00:04:25.187 "fda5f701-a6aa-5dc8-9a71-d8fcf19f3ebf" 00:04:25.187 ], 00:04:25.187 "product_name": "passthru", 00:04:25.187 "block_size": 512, 00:04:25.187 "num_blocks": 16384, 00:04:25.187 "uuid": "fda5f701-a6aa-5dc8-9a71-d8fcf19f3ebf", 00:04:25.187 "assigned_rate_limits": { 00:04:25.187 "rw_ios_per_sec": 0, 00:04:25.187 "rw_mbytes_per_sec": 0, 00:04:25.187 "r_mbytes_per_sec": 0, 00:04:25.187 "w_mbytes_per_sec": 0 00:04:25.187 }, 00:04:25.187 "claimed": false, 00:04:25.187 "zoned": false, 00:04:25.187 "supported_io_types": { 00:04:25.187 "read": true, 00:04:25.187 "write": true, 00:04:25.187 "unmap": true, 00:04:25.187 "write_zeroes": true, 00:04:25.187 "flush": true, 00:04:25.187 "reset": true, 00:04:25.187 "compare": false, 00:04:25.187 "compare_and_write": false, 00:04:25.187 "abort": true, 00:04:25.187 "nvme_admin": false, 00:04:25.187 "nvme_io": false 00:04:25.187 }, 00:04:25.187 "memory_domains": [ 00:04:25.187 { 00:04:25.187 "dma_device_id": "system", 00:04:25.187 "dma_device_type": 1 00:04:25.187 }, 00:04:25.187 { 00:04:25.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.187 "dma_device_type": 2 00:04:25.187 } 00:04:25.187 ], 00:04:25.187 "driver_specific": { 00:04:25.187 "passthru": { 00:04:25.187 "name": "Passthru0", 00:04:25.187 "base_bdev_name": "Malloc2" 00:04:25.187 } 00:04:25.187 } 00:04:25.187 } 00:04:25.187 ]' 00:04:25.187 20:03:12 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.187 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.188 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.188 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.188 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.188 20:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.188 00:04:25.188 real 0m0.243s 00:04:25.188 user 0m0.161s 00:04:25.188 sys 0m0.031s 00:04:25.188 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.188 20:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.188 ************************************ 00:04:25.188 END TEST rpc_daemon_integrity 00:04:25.188 ************************************ 00:04:25.447 20:03:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:25.447 20:03:12 rpc -- rpc/rpc.sh@84 -- # killprocess 1644823 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@946 -- # '[' -z 1644823 ']' 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@950 -- # kill -0 1644823 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@951 -- # uname 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1644823 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1644823' 00:04:25.447 killing process with pid 1644823 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@965 -- # kill 1644823 00:04:25.447 20:03:12 rpc -- common/autotest_common.sh@970 -- # wait 1644823 00:04:25.706 00:04:25.706 real 0m1.890s 00:04:25.706 user 0m2.428s 00:04:25.706 sys 0m0.622s 00:04:25.706 20:03:12 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.706 20:03:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.706 ************************************ 00:04:25.706 END TEST rpc 00:04:25.706 ************************************ 00:04:25.706 20:03:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:25.706 20:03:12 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.706 20:03:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.706 20:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:25.706 ************************************ 00:04:25.706 START TEST skip_rpc 00:04:25.706 ************************************ 00:04:25.706 20:03:12 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:25.706 * Looking for test storage... 00:04:25.965 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:25.965 20:03:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:25.965 20:03:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:25.965 20:03:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.966 20:03:12 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.966 20:03:12 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.966 20:03:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.966 ************************************ 00:04:25.966 START TEST skip_rpc 00:04:25.966 ************************************ 00:04:25.966 20:03:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:25.966 20:03:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1645290 00:04:25.966 20:03:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.966 20:03:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.966 20:03:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:25.966 [2024-05-16 20:03:12.921222] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
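
The skip_rpc case that starts here inverts the usual setup: spdk_tgt is launched with --no-rpc-server, the five-second sleep stands in for startup synchronization, and the harness then expects the following rpc_cmd spdk_get_version to fail because nothing is listening on the socket. A hedged sketch of the same assertion outside the harness (paths assume an SPDK checkout; the fixed sleep is the test's own crude synchronization):

    # Sketch: with --no-rpc-server, any RPC against the default socket must fail.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server answered" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"
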
00:04:25.966 [2024-05-16 20:03:12.921289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645290 ] 00:04:25.966 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.966 [2024-05-16 20:03:12.979048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.966 [2024-05-16 20:03:13.055740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1645290 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 1645290 ']' 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 1645290 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1645290 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1645290' 00:04:31.239 killing process with pid 1645290 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 1645290 00:04:31.239 20:03:17 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 1645290 00:04:31.239 00:04:31.239 real 0m5.381s 00:04:31.239 user 0m5.128s 00:04:31.239 sys 0m0.285s 00:04:31.239 20:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.239 20:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.239 ************************************ 00:04:31.239 END TEST skip_rpc 
00:04:31.239 ************************************ 00:04:31.239 20:03:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:31.239 20:03:18 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.239 20:03:18 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.239 20:03:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.239 ************************************ 00:04:31.239 START TEST skip_rpc_with_json 00:04:31.239 ************************************ 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1646151 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1646151 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 1646151 ']' 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.239 20:03:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.239 [2024-05-16 20:03:18.373154] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:31.239 [2024-05-16 20:03:18.373207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646151 ] 00:04:31.498 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.498 [2024-05-16 20:03:18.426998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.498 [2024-05-16 20:03:18.509530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.067 [2024-05-16 20:03:19.196491] nvmf_rpc.c:2548:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:32.067 request: 00:04:32.067 { 00:04:32.067 "trtype": "tcp", 00:04:32.067 "method": "nvmf_get_transports", 00:04:32.067 "req_id": 1 00:04:32.067 } 00:04:32.067 Got JSON-RPC error response 00:04:32.067 response: 00:04:32.067 { 00:04:32.067 "code": -19, 00:04:32.067 "message": "No such device" 00:04:32.067 } 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.067 [2024-05-16 20:03:19.204567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.067 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.326 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.326 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:32.326 { 00:04:32.326 "subsystems": [ 00:04:32.326 { 00:04:32.326 "subsystem": "scheduler", 00:04:32.326 "config": [ 00:04:32.326 { 00:04:32.326 "method": "framework_set_scheduler", 00:04:32.326 "params": { 00:04:32.326 "name": "static" 00:04:32.326 } 00:04:32.326 } 00:04:32.326 ] 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "subsystem": "vmd", 00:04:32.326 "config": [] 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "subsystem": "sock", 00:04:32.326 "config": [ 00:04:32.326 { 00:04:32.326 "method": "sock_set_default_impl", 00:04:32.326 "params": { 00:04:32.326 "impl_name": "posix" 00:04:32.326 } 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "method": "sock_impl_set_options", 00:04:32.326 "params": { 00:04:32.326 "impl_name": "ssl", 00:04:32.326 "recv_buf_size": 4096, 00:04:32.326 "send_buf_size": 4096, 00:04:32.326 "enable_recv_pipe": true, 00:04:32.326 "enable_quickack": 
false, 00:04:32.326 "enable_placement_id": 0, 00:04:32.326 "enable_zerocopy_send_server": true, 00:04:32.326 "enable_zerocopy_send_client": false, 00:04:32.326 "zerocopy_threshold": 0, 00:04:32.326 "tls_version": 0, 00:04:32.326 "enable_ktls": false 00:04:32.326 } 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "method": "sock_impl_set_options", 00:04:32.326 "params": { 00:04:32.326 "impl_name": "posix", 00:04:32.326 "recv_buf_size": 2097152, 00:04:32.326 "send_buf_size": 2097152, 00:04:32.326 "enable_recv_pipe": true, 00:04:32.326 "enable_quickack": false, 00:04:32.326 "enable_placement_id": 0, 00:04:32.326 "enable_zerocopy_send_server": true, 00:04:32.326 "enable_zerocopy_send_client": false, 00:04:32.326 "zerocopy_threshold": 0, 00:04:32.326 "tls_version": 0, 00:04:32.326 "enable_ktls": false 00:04:32.326 } 00:04:32.326 } 00:04:32.326 ] 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "subsystem": "iobuf", 00:04:32.326 "config": [ 00:04:32.326 { 00:04:32.326 "method": "iobuf_set_options", 00:04:32.326 "params": { 00:04:32.326 "small_pool_count": 8192, 00:04:32.326 "large_pool_count": 1024, 00:04:32.326 "small_bufsize": 8192, 00:04:32.326 "large_bufsize": 135168 00:04:32.326 } 00:04:32.326 } 00:04:32.326 ] 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "subsystem": "keyring", 00:04:32.326 "config": [] 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "subsystem": "vfio_user_target", 00:04:32.326 "config": null 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "subsystem": "accel", 00:04:32.326 "config": [ 00:04:32.326 { 00:04:32.326 "method": "accel_set_options", 00:04:32.326 "params": { 00:04:32.326 "small_cache_size": 128, 00:04:32.326 "large_cache_size": 16, 00:04:32.326 "task_count": 2048, 00:04:32.326 "sequence_count": 2048, 00:04:32.326 "buf_count": 2048 00:04:32.326 } 00:04:32.326 } 00:04:32.326 ] 00:04:32.326 }, 00:04:32.326 { 00:04:32.326 "subsystem": "bdev", 00:04:32.326 "config": [ 00:04:32.326 { 00:04:32.327 "method": "bdev_set_options", 00:04:32.327 "params": { 00:04:32.327 "bdev_io_pool_size": 65535, 00:04:32.327 "bdev_io_cache_size": 256, 00:04:32.327 "bdev_auto_examine": true, 00:04:32.327 "iobuf_small_cache_size": 128, 00:04:32.327 "iobuf_large_cache_size": 16 00:04:32.327 } 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "method": "bdev_raid_set_options", 00:04:32.327 "params": { 00:04:32.327 "process_window_size_kb": 1024 00:04:32.327 } 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "method": "bdev_nvme_set_options", 00:04:32.327 "params": { 00:04:32.327 "action_on_timeout": "none", 00:04:32.327 "timeout_us": 0, 00:04:32.327 "timeout_admin_us": 0, 00:04:32.327 "keep_alive_timeout_ms": 10000, 00:04:32.327 "arbitration_burst": 0, 00:04:32.327 "low_priority_weight": 0, 00:04:32.327 "medium_priority_weight": 0, 00:04:32.327 "high_priority_weight": 0, 00:04:32.327 "nvme_adminq_poll_period_us": 10000, 00:04:32.327 "nvme_ioq_poll_period_us": 0, 00:04:32.327 "io_queue_requests": 0, 00:04:32.327 "delay_cmd_submit": true, 00:04:32.327 "transport_retry_count": 4, 00:04:32.327 "bdev_retry_count": 3, 00:04:32.327 "transport_ack_timeout": 0, 00:04:32.327 "ctrlr_loss_timeout_sec": 0, 00:04:32.327 "reconnect_delay_sec": 0, 00:04:32.327 "fast_io_fail_timeout_sec": 0, 00:04:32.327 "disable_auto_failback": false, 00:04:32.327 "generate_uuids": false, 00:04:32.327 "transport_tos": 0, 00:04:32.327 "nvme_error_stat": false, 00:04:32.327 "rdma_srq_size": 0, 00:04:32.327 "io_path_stat": false, 00:04:32.327 "allow_accel_sequence": false, 00:04:32.327 "rdma_max_cq_size": 0, 00:04:32.327 "rdma_cm_event_timeout_ms": 0, 
00:04:32.327 "dhchap_digests": [ 00:04:32.327 "sha256", 00:04:32.327 "sha384", 00:04:32.327 "sha512" 00:04:32.327 ], 00:04:32.327 "dhchap_dhgroups": [ 00:04:32.327 "null", 00:04:32.327 "ffdhe2048", 00:04:32.327 "ffdhe3072", 00:04:32.327 "ffdhe4096", 00:04:32.327 "ffdhe6144", 00:04:32.327 "ffdhe8192" 00:04:32.327 ] 00:04:32.327 } 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "method": "bdev_nvme_set_hotplug", 00:04:32.327 "params": { 00:04:32.327 "period_us": 100000, 00:04:32.327 "enable": false 00:04:32.327 } 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "method": "bdev_iscsi_set_options", 00:04:32.327 "params": { 00:04:32.327 "timeout_sec": 30 00:04:32.327 } 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "method": "bdev_wait_for_examine" 00:04:32.327 } 00:04:32.327 ] 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "subsystem": "nvmf", 00:04:32.327 "config": [ 00:04:32.327 { 00:04:32.327 "method": "nvmf_set_config", 00:04:32.327 "params": { 00:04:32.327 "discovery_filter": "match_any", 00:04:32.327 "admin_cmd_passthru": { 00:04:32.327 "identify_ctrlr": false 00:04:32.327 } 00:04:32.327 } 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "method": "nvmf_set_max_subsystems", 00:04:32.327 "params": { 00:04:32.327 "max_subsystems": 1024 00:04:32.327 } 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "method": "nvmf_set_crdt", 00:04:32.327 "params": { 00:04:32.327 "crdt1": 0, 00:04:32.327 "crdt2": 0, 00:04:32.327 "crdt3": 0 00:04:32.327 } 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "method": "nvmf_create_transport", 00:04:32.327 "params": { 00:04:32.327 "trtype": "TCP", 00:04:32.327 "max_queue_depth": 128, 00:04:32.327 "max_io_qpairs_per_ctrlr": 127, 00:04:32.327 "in_capsule_data_size": 4096, 00:04:32.327 "max_io_size": 131072, 00:04:32.327 "io_unit_size": 131072, 00:04:32.327 "max_aq_depth": 128, 00:04:32.327 "num_shared_buffers": 511, 00:04:32.327 "buf_cache_size": 4294967295, 00:04:32.327 "dif_insert_or_strip": false, 00:04:32.327 "zcopy": false, 00:04:32.327 "c2h_success": true, 00:04:32.327 "sock_priority": 0, 00:04:32.327 "abort_timeout_sec": 1, 00:04:32.327 "ack_timeout": 0, 00:04:32.327 "data_wr_pool_size": 0 00:04:32.327 } 00:04:32.327 } 00:04:32.327 ] 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "subsystem": "nbd", 00:04:32.327 "config": [] 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "subsystem": "ublk", 00:04:32.327 "config": [] 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "subsystem": "vhost_blk", 00:04:32.327 "config": [] 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "subsystem": "scsi", 00:04:32.327 "config": null 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "subsystem": "iscsi", 00:04:32.327 "config": [ 00:04:32.327 { 00:04:32.327 "method": "iscsi_set_options", 00:04:32.327 "params": { 00:04:32.327 "node_base": "iqn.2016-06.io.spdk", 00:04:32.327 "max_sessions": 128, 00:04:32.327 "max_connections_per_session": 2, 00:04:32.327 "max_queue_depth": 64, 00:04:32.327 "default_time2wait": 2, 00:04:32.327 "default_time2retain": 20, 00:04:32.327 "first_burst_length": 8192, 00:04:32.327 "immediate_data": true, 00:04:32.327 "allow_duplicated_isid": false, 00:04:32.327 "error_recovery_level": 0, 00:04:32.327 "nop_timeout": 60, 00:04:32.327 "nop_in_interval": 30, 00:04:32.327 "disable_chap": false, 00:04:32.327 "require_chap": false, 00:04:32.327 "mutual_chap": false, 00:04:32.327 "chap_group": 0, 00:04:32.327 "max_large_datain_per_connection": 64, 00:04:32.327 "max_r2t_per_connection": 4, 00:04:32.327 "pdu_pool_size": 36864, 00:04:32.327 "immediate_data_pool_size": 16384, 00:04:32.327 "data_out_pool_size": 2048 
00:04:32.327 } 00:04:32.327 } 00:04:32.327 ] 00:04:32.327 }, 00:04:32.327 { 00:04:32.327 "subsystem": "vhost_scsi", 00:04:32.327 "config": [] 00:04:32.327 } 00:04:32.327 ] 00:04:32.327 } 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1646151 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1646151 ']' 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1646151 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1646151 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1646151' 00:04:32.327 killing process with pid 1646151 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1646151 00:04:32.327 20:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1646151 00:04:32.587 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1646370 00:04:32.587 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:32.587 20:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1646370 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1646370 ']' 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1646370 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1646370 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1646370' 00:04:37.861 killing process with pid 1646370 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1646370 00:04:37.861 20:03:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1646370 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:38.120 00:04:38.120 real 
0m6.748s 00:04:38.120 user 0m6.535s 00:04:38.120 sys 0m0.588s 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.120 ************************************ 00:04:38.120 END TEST skip_rpc_with_json 00:04:38.120 ************************************ 00:04:38.120 20:03:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:38.120 20:03:25 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.120 20:03:25 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.120 20:03:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.120 ************************************ 00:04:38.120 START TEST skip_rpc_with_delay 00:04:38.120 ************************************ 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.120 [2024-05-16 20:03:25.197009] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
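
The skip_rpc_with_json run that just finished is a save/restore round trip: the first target starts bare, nvmf_get_transports --trtype tcp correctly errors with "No such device" before any transport exists, nvmf_create_transport -t tcp brings TCP up, save_config serializes the whole subsystem tree to config.json, and a second target started with --json replays it — the grep for 'TCP Transport Init' in the log proves the transport was reconstructed without a single RPC. The same loop, sketched with illustrative /tmp paths against a target on the default socket:

    # Sketch of the JSON config round trip; file locations are illustrative.
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json

    # Restart from the saved JSON: the transport should come back on its own.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
    pid=$!
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt && echo "transport restored from JSON"
    kill "$pid"
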
00:04:38.120 [2024-05-16 20:03:25.197148] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:38.120 00:04:38.120 real 0m0.040s 00:04:38.120 user 0m0.017s 00:04:38.120 sys 0m0.023s 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.120 20:03:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.120 ************************************ 00:04:38.120 END TEST skip_rpc_with_delay 00:04:38.120 ************************************ 00:04:38.120 20:03:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.120 20:03:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.120 20:03:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.120 20:03:25 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.120 20:03:25 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.120 20:03:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.379 ************************************ 00:04:38.379 START TEST exit_on_failed_rpc_init 00:04:38.379 ************************************ 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1647312 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1647312 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 1647312 ']' 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.379 20:03:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.379 [2024-05-16 20:03:25.310265] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
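
skip_rpc_with_delay, which ended just above, asserts a startup contract rather than an RPC result: --wait-for-rpc makes the app pause until an RPC releases initialization, which can never arrive when --no-rpc-server is also given, so spdk_tgt prints the "Cannot use '--wait-for-rpc'" error and exits non-zero almost immediately (the test's own timing shows real 0m0.040s). A sketch of that negative check, under the same checkout assumptions as the sketches above:

    # Sketch: the two flags contradict each other, so startup must fail fast.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt started with contradictory flags" >&2
        exit 1
    fi
    echo "spdk_tgt refused the flag combination, as expected"
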
00:04:38.379 [2024-05-16 20:03:25.310332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647312 ] 00:04:38.379 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.379 [2024-05-16 20:03:25.366683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.379 [2024-05-16 20:03:25.449929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.316 [2024-05-16 20:03:26.145081] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:39.316 [2024-05-16 20:03:26.145125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647519 ] 00:04:39.316 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.316 [2024-05-16 20:03:26.195784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.316 [2024-05-16 20:03:26.273441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.316 [2024-05-16 20:03:26.273521] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:04:39.316 [2024-05-16 20:03:26.273531] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.316 [2024-05-16 20:03:26.273537] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1647312 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 1647312 ']' 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 1647312 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1647312 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1647312' 00:04:39.316 killing process with pid 1647312 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 1647312 00:04:39.316 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 1647312 00:04:39.575 00:04:39.575 real 0m1.414s 00:04:39.575 user 0m1.578s 00:04:39.575 sys 0m0.400s 00:04:39.575 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.575 20:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.575 ************************************ 00:04:39.575 END TEST exit_on_failed_rpc_init 00:04:39.575 ************************************ 00:04:39.834 20:03:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:39.834 00:04:39.834 real 0m13.945s 00:04:39.834 user 0m13.394s 00:04:39.834 sys 0m1.528s 00:04:39.834 20:03:26 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.834 20:03:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.834 ************************************ 00:04:39.834 END TEST skip_rpc 00:04:39.834 ************************************ 00:04:39.834 20:03:26 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.834 20:03:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.834 20:03:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 
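
exit_on_failed_rpc_init, closed out above, is a socket-collision test: a first spdk_tgt (-m 0x1, pid 1647312) owns /var/tmp/spdk.sock, a second one (-m 0x2) is pointed at the same default path, fails with "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.", and spdk_app_stop's non-zero — the harness folds the raw status 234 down to 106 and finally to the generic es=1 before killing the first instance. Roughly, as a sketch with the same crude sleep standing in for the harness's waitforlisten:

    # Sketch: two targets cannot share one RPC socket path.
    ./build/bin/spdk_tgt -m 0x1 &      # first instance binds /var/tmp/spdk.sock
    first=$!
    sleep 5                            # crude stand-in for waitforlisten
    if ./build/bin/spdk_tgt -m 0x2; then
        echo "unexpected: second instance started on a busy socket" >&2
        kill "$first"; exit 1
    fi
    kill "$first"
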
00:04:39.834 20:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:39.834 ************************************ 00:04:39.834 START TEST rpc_client 00:04:39.834 ************************************ 00:04:39.834 20:03:26 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.834 * Looking for test storage... 00:04:39.834 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:39.834 20:03:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:39.834 OK 00:04:39.834 20:03:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:39.834 00:04:39.834 real 0m0.108s 00:04:39.834 user 0m0.046s 00:04:39.834 sys 0m0.069s 00:04:39.834 20:03:26 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.834 20:03:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:39.834 ************************************ 00:04:39.834 END TEST rpc_client 00:04:39.834 ************************************ 00:04:39.834 20:03:26 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:39.834 20:03:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.834 20:03:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.834 20:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:39.834 ************************************ 00:04:39.834 START TEST json_config 00:04:39.834 ************************************ 00:04:39.834 20:03:26 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:40.095 20:03:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8089bee2-271d-eb11-906e-0017a4403562 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8089bee2-271d-eb11-906e-0017a4403562 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.095 20:03:27 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:40.095 20:03:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.095 20:03:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.095 20:03:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.095 20:03:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.095 20:03:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.095 20:03:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.095 20:03:27 json_config -- paths/export.sh@5 -- # export PATH 00:04:40.095 20:03:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@47 -- # : 0 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:40.095 20:03:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:40.095 20:03:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:40.095 20:03:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:40.095 20:03:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:40.095 20:03:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:40.095 20:03:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.095 20:03:27 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:40.095 WARNING: No tests are enabled so not running JSON configuration tests 00:04:40.095 20:03:27 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:40.095 00:04:40.095 real 0m0.095s 00:04:40.095 user 0m0.052s 00:04:40.095 sys 0m0.044s 00:04:40.095 20:03:27 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.095 20:03:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.095 ************************************ 00:04:40.095 END TEST json_config 00:04:40.095 ************************************ 00:04:40.095 20:03:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:40.095 20:03:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.095 20:03:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.095 20:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:40.095 ************************************ 00:04:40.095 START TEST json_config_extra_key 00:04:40.095 ************************************ 00:04:40.095 20:03:27 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:40.095 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8089bee2-271d-eb11-906e-0017a4403562 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8089bee2-271d-eb11-906e-0017a4403562 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
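[Editor's note] json_config bows out above because the short-fuzz run enables none of the subsystem flags the suite covers. The guard being exercised is visible in the xtrace and amounts to the following (flag values come from autorun-spdk.conf; unset flags evaluate as 0 in bash arithmetic):

if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + \
      SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
    echo 'WARNING: No tests are enabled so not running JSON configuration tests'
    exit 0
fi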
00:04:40.095 20:03:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.095 20:03:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.095 20:03:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.095 20:03:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.095 20:03:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.095 20:03:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.095 20:03:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:40.095 20:03:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.095 20:03:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.096 20:03:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:40.096 20:03:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:40.096 20:03:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:40.096 20:03:27 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:40.096 INFO: launching applications... 00:04:40.096 20:03:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1647879 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:40.096 Waiting for target to run... 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1647879 /var/tmp/spdk_tgt.sock 00:04:40.096 20:03:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:40.096 20:03:27 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 1647879 ']' 00:04:40.096 20:03:27 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.096 20:03:27 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:40.096 20:03:27 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.096 20:03:27 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:40.096 20:03:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.096 [2024-05-16 20:03:27.237824] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
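[Editor's note] The json_config_extra_key launch in progress here condenses to: start spdk_tgt on a private RPC socket with the pre-baked extra_key.json, record the pid, and block until the socket answers. A sketch under the paths shown in the log; the polling loop is a hypothetical stand-in for the real waitforlisten helper in autotest_common.sh:

sock=/var/tmp/spdk_tgt.sock
./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
    --json test/json_config/extra_key.json &
pid=$!
# hypothetical minimal waitforlisten: poll a cheap RPC until the socket is live
until ./scripts/rpc.py -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done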
00:04:40.096 [2024-05-16 20:03:27.237904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647879 ] 00:04:40.356 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.615 [2024-05-16 20:03:27.513038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.615 [2024-05-16 20:03:27.577808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.183 20:03:28 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:41.183 20:03:28 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:41.183 00:04:41.183 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:41.183 INFO: shutting down applications... 00:04:41.183 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1647879 ]] 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1647879 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1647879 00:04:41.183 20:03:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.442 20:03:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.442 20:03:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.442 20:03:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1647879 00:04:41.442 20:03:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:41.442 20:03:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:41.442 20:03:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:41.442 20:03:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:41.442 SPDK target shutdown done 00:04:41.442 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:41.442 Success 00:04:41.442 00:04:41.442 real 0m1.432s 00:04:41.442 user 0m1.251s 00:04:41.442 sys 0m0.352s 00:04:41.442 20:03:28 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.442 20:03:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.442 ************************************ 00:04:41.442 END TEST json_config_extra_key 00:04:41.442 ************************************ 00:04:41.701 20:03:28 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.701 20:03:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.701 20:03:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.701 20:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:41.701 ************************************ 
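[Editor's note] The shutdown just logged is the standard pattern from json_config/common.sh: SIGINT the target, then poll its pid for up to 30 half-second intervals before declaring success. Roughly, per the xtrace above:

kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break   # kill -0 just tests pid liveness
    sleep 0.5
done
echo 'SPDK target shutdown done'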
00:04:41.701 START TEST alias_rpc 00:04:41.701 ************************************ 00:04:41.701 20:03:28 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.701 * Looking for test storage... 00:04:41.701 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:41.701 20:03:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:41.701 20:03:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1648151 00:04:41.701 20:03:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.701 20:03:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1648151 00:04:41.701 20:03:28 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 1648151 ']' 00:04:41.701 20:03:28 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.702 20:03:28 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:41.702 20:03:28 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.702 20:03:28 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:41.702 20:03:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.702 [2024-05-16 20:03:28.727366] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:41.702 [2024-05-16 20:03:28.727427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648151 ] 00:04:41.702 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.702 [2024-05-16 20:03:28.781371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.961 [2024-05-16 20:03:28.865250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.961 20:03:29 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:41.961 20:03:29 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:41.961 20:03:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:42.220 20:03:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1648151 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 1648151 ']' 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 1648151 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1648151 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1648151' 00:04:42.220 killing process with pid 1648151 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@965 -- # kill 1648151 00:04:42.220 20:03:29 alias_rpc -- common/autotest_common.sh@970 -- # wait 
1648151 00:04:42.789 00:04:42.789 real 0m1.027s 00:04:42.789 user 0m1.048s 00:04:42.789 sys 0m0.359s 00:04:42.789 20:03:29 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.789 20:03:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.789 ************************************ 00:04:42.789 END TEST alias_rpc 00:04:42.789 ************************************ 00:04:42.789 20:03:29 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:42.789 20:03:29 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.789 20:03:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.789 20:03:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.789 20:03:29 -- common/autotest_common.sh@10 -- # set +x 00:04:42.789 ************************************ 00:04:42.789 START TEST spdkcli_tcp 00:04:42.789 ************************************ 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.789 * Looking for test storage... 00:04:42.789 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1648422 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1648422 00:04:42.789 20:03:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 1648422 ']' 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:42.789 20:03:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.789 [2024-05-16 20:03:29.834133] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
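[Editor's note] killprocess, which closed out alias_rpc above and recurs after every suite, performs the sanity checks seen in the xtrace (pid non-empty, still alive, not a sudo process) before killing and reaping. A rough reconstruction, hedged, since only the traced code path is visible here:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                    # the '[' -z ... ']' guard
    kill -0 "$pid" || return 1                   # target must still be alive
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
        # the sudo branch never fires in this log; bailing out is an assumption
        [ "$name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}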
00:04:42.789 [2024-05-16 20:03:29.834208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648422 ] 00:04:42.789 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.789 [2024-05-16 20:03:29.890236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.049 [2024-05-16 20:03:29.971955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.049 [2024-05-16 20:03:29.971959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.616 20:03:30 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:43.616 20:03:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:43.616 20:03:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:43.616 20:03:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1648435 00:04:43.616 20:03:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.876 [ 00:04:43.876 "spdk_get_version", 00:04:43.876 "rpc_get_methods", 00:04:43.876 "trace_get_info", 00:04:43.876 "trace_get_tpoint_group_mask", 00:04:43.876 "trace_disable_tpoint_group", 00:04:43.876 "trace_enable_tpoint_group", 00:04:43.876 "trace_clear_tpoint_mask", 00:04:43.876 "trace_set_tpoint_mask", 00:04:43.876 "vfu_tgt_set_base_path", 00:04:43.876 "framework_get_pci_devices", 00:04:43.876 "framework_get_config", 00:04:43.876 "framework_get_subsystems", 00:04:43.876 "keyring_get_keys", 00:04:43.876 "iobuf_get_stats", 00:04:43.876 "iobuf_set_options", 00:04:43.876 "sock_get_default_impl", 00:04:43.876 "sock_set_default_impl", 00:04:43.876 "sock_impl_set_options", 00:04:43.876 "sock_impl_get_options", 00:04:43.876 "vmd_rescan", 00:04:43.876 "vmd_remove_device", 00:04:43.876 "vmd_enable", 00:04:43.876 "accel_get_stats", 00:04:43.876 "accel_set_options", 00:04:43.876 "accel_set_driver", 00:04:43.876 "accel_crypto_key_destroy", 00:04:43.876 "accel_crypto_keys_get", 00:04:43.876 "accel_crypto_key_create", 00:04:43.876 "accel_assign_opc", 00:04:43.876 "accel_get_module_info", 00:04:43.876 "accel_get_opc_assignments", 00:04:43.876 "notify_get_notifications", 00:04:43.876 "notify_get_types", 00:04:43.876 "bdev_get_histogram", 00:04:43.876 "bdev_enable_histogram", 00:04:43.876 "bdev_set_qos_limit", 00:04:43.876 "bdev_set_qd_sampling_period", 00:04:43.876 "bdev_get_bdevs", 00:04:43.876 "bdev_reset_iostat", 00:04:43.876 "bdev_get_iostat", 00:04:43.876 "bdev_examine", 00:04:43.876 "bdev_wait_for_examine", 00:04:43.876 "bdev_set_options", 00:04:43.876 "scsi_get_devices", 00:04:43.876 "thread_set_cpumask", 00:04:43.876 "framework_get_scheduler", 00:04:43.876 "framework_set_scheduler", 00:04:43.876 "framework_get_reactors", 00:04:43.876 "thread_get_io_channels", 00:04:43.876 "thread_get_pollers", 00:04:43.876 "thread_get_stats", 00:04:43.876 "framework_monitor_context_switch", 00:04:43.876 "spdk_kill_instance", 00:04:43.876 "log_enable_timestamps", 00:04:43.876 "log_get_flags", 00:04:43.876 "log_clear_flag", 00:04:43.876 "log_set_flag", 00:04:43.876 "log_get_level", 00:04:43.876 "log_set_level", 00:04:43.876 "log_get_print_level", 00:04:43.876 "log_set_print_level", 00:04:43.876 "framework_enable_cpumask_locks", 00:04:43.876 "framework_disable_cpumask_locks", 00:04:43.876 "framework_wait_init", 00:04:43.876 
"framework_start_init", 00:04:43.876 "virtio_blk_create_transport", 00:04:43.876 "virtio_blk_get_transports", 00:04:43.876 "vhost_controller_set_coalescing", 00:04:43.876 "vhost_get_controllers", 00:04:43.876 "vhost_delete_controller", 00:04:43.876 "vhost_create_blk_controller", 00:04:43.876 "vhost_scsi_controller_remove_target", 00:04:43.876 "vhost_scsi_controller_add_target", 00:04:43.876 "vhost_start_scsi_controller", 00:04:43.876 "vhost_create_scsi_controller", 00:04:43.876 "ublk_recover_disk", 00:04:43.876 "ublk_get_disks", 00:04:43.876 "ublk_stop_disk", 00:04:43.876 "ublk_start_disk", 00:04:43.876 "ublk_destroy_target", 00:04:43.876 "ublk_create_target", 00:04:43.876 "nbd_get_disks", 00:04:43.876 "nbd_stop_disk", 00:04:43.876 "nbd_start_disk", 00:04:43.876 "env_dpdk_get_mem_stats", 00:04:43.876 "nvmf_stop_mdns_prr", 00:04:43.876 "nvmf_publish_mdns_prr", 00:04:43.876 "nvmf_subsystem_get_listeners", 00:04:43.876 "nvmf_subsystem_get_qpairs", 00:04:43.876 "nvmf_subsystem_get_controllers", 00:04:43.876 "nvmf_get_stats", 00:04:43.876 "nvmf_get_transports", 00:04:43.876 "nvmf_create_transport", 00:04:43.876 "nvmf_get_targets", 00:04:43.876 "nvmf_delete_target", 00:04:43.876 "nvmf_create_target", 00:04:43.876 "nvmf_subsystem_allow_any_host", 00:04:43.876 "nvmf_subsystem_remove_host", 00:04:43.876 "nvmf_subsystem_add_host", 00:04:43.876 "nvmf_ns_remove_host", 00:04:43.876 "nvmf_ns_add_host", 00:04:43.876 "nvmf_subsystem_remove_ns", 00:04:43.876 "nvmf_subsystem_add_ns", 00:04:43.876 "nvmf_subsystem_listener_set_ana_state", 00:04:43.876 "nvmf_discovery_get_referrals", 00:04:43.876 "nvmf_discovery_remove_referral", 00:04:43.876 "nvmf_discovery_add_referral", 00:04:43.876 "nvmf_subsystem_remove_listener", 00:04:43.876 "nvmf_subsystem_add_listener", 00:04:43.876 "nvmf_delete_subsystem", 00:04:43.876 "nvmf_create_subsystem", 00:04:43.876 "nvmf_get_subsystems", 00:04:43.876 "nvmf_set_crdt", 00:04:43.876 "nvmf_set_config", 00:04:43.876 "nvmf_set_max_subsystems", 00:04:43.876 "iscsi_get_histogram", 00:04:43.876 "iscsi_enable_histogram", 00:04:43.876 "iscsi_set_options", 00:04:43.876 "iscsi_get_auth_groups", 00:04:43.876 "iscsi_auth_group_remove_secret", 00:04:43.876 "iscsi_auth_group_add_secret", 00:04:43.876 "iscsi_delete_auth_group", 00:04:43.876 "iscsi_create_auth_group", 00:04:43.876 "iscsi_set_discovery_auth", 00:04:43.876 "iscsi_get_options", 00:04:43.876 "iscsi_target_node_request_logout", 00:04:43.876 "iscsi_target_node_set_redirect", 00:04:43.876 "iscsi_target_node_set_auth", 00:04:43.876 "iscsi_target_node_add_lun", 00:04:43.876 "iscsi_get_stats", 00:04:43.876 "iscsi_get_connections", 00:04:43.876 "iscsi_portal_group_set_auth", 00:04:43.876 "iscsi_start_portal_group", 00:04:43.876 "iscsi_delete_portal_group", 00:04:43.876 "iscsi_create_portal_group", 00:04:43.876 "iscsi_get_portal_groups", 00:04:43.876 "iscsi_delete_target_node", 00:04:43.876 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.876 "iscsi_target_node_add_pg_ig_maps", 00:04:43.876 "iscsi_create_target_node", 00:04:43.876 "iscsi_get_target_nodes", 00:04:43.876 "iscsi_delete_initiator_group", 00:04:43.876 "iscsi_initiator_group_remove_initiators", 00:04:43.877 "iscsi_initiator_group_add_initiators", 00:04:43.877 "iscsi_create_initiator_group", 00:04:43.877 "iscsi_get_initiator_groups", 00:04:43.877 "keyring_file_remove_key", 00:04:43.877 "keyring_file_add_key", 00:04:43.877 "vfu_virtio_create_scsi_endpoint", 00:04:43.877 "vfu_virtio_scsi_remove_target", 00:04:43.877 "vfu_virtio_scsi_add_target", 00:04:43.877 
"vfu_virtio_create_blk_endpoint", 00:04:43.877 "vfu_virtio_delete_endpoint", 00:04:43.877 "iaa_scan_accel_module", 00:04:43.877 "dsa_scan_accel_module", 00:04:43.877 "ioat_scan_accel_module", 00:04:43.877 "accel_error_inject_error", 00:04:43.877 "bdev_iscsi_delete", 00:04:43.877 "bdev_iscsi_create", 00:04:43.877 "bdev_iscsi_set_options", 00:04:43.877 "bdev_virtio_attach_controller", 00:04:43.877 "bdev_virtio_scsi_get_devices", 00:04:43.877 "bdev_virtio_detach_controller", 00:04:43.877 "bdev_virtio_blk_set_hotplug", 00:04:43.877 "bdev_ftl_set_property", 00:04:43.877 "bdev_ftl_get_properties", 00:04:43.877 "bdev_ftl_get_stats", 00:04:43.877 "bdev_ftl_unmap", 00:04:43.877 "bdev_ftl_unload", 00:04:43.877 "bdev_ftl_delete", 00:04:43.877 "bdev_ftl_load", 00:04:43.877 "bdev_ftl_create", 00:04:43.877 "bdev_aio_delete", 00:04:43.877 "bdev_aio_rescan", 00:04:43.877 "bdev_aio_create", 00:04:43.877 "blobfs_create", 00:04:43.877 "blobfs_detect", 00:04:43.877 "blobfs_set_cache_size", 00:04:43.877 "bdev_zone_block_delete", 00:04:43.877 "bdev_zone_block_create", 00:04:43.877 "bdev_delay_delete", 00:04:43.877 "bdev_delay_create", 00:04:43.877 "bdev_delay_update_latency", 00:04:43.877 "bdev_split_delete", 00:04:43.877 "bdev_split_create", 00:04:43.877 "bdev_error_inject_error", 00:04:43.877 "bdev_error_delete", 00:04:43.877 "bdev_error_create", 00:04:43.877 "bdev_raid_set_options", 00:04:43.877 "bdev_raid_remove_base_bdev", 00:04:43.877 "bdev_raid_add_base_bdev", 00:04:43.877 "bdev_raid_delete", 00:04:43.877 "bdev_raid_create", 00:04:43.877 "bdev_raid_get_bdevs", 00:04:43.877 "bdev_lvol_set_parent_bdev", 00:04:43.877 "bdev_lvol_set_parent", 00:04:43.877 "bdev_lvol_check_shallow_copy", 00:04:43.877 "bdev_lvol_start_shallow_copy", 00:04:43.877 "bdev_lvol_grow_lvstore", 00:04:43.877 "bdev_lvol_get_lvols", 00:04:43.877 "bdev_lvol_get_lvstores", 00:04:43.877 "bdev_lvol_delete", 00:04:43.877 "bdev_lvol_set_read_only", 00:04:43.877 "bdev_lvol_resize", 00:04:43.877 "bdev_lvol_decouple_parent", 00:04:43.877 "bdev_lvol_inflate", 00:04:43.877 "bdev_lvol_rename", 00:04:43.877 "bdev_lvol_clone_bdev", 00:04:43.877 "bdev_lvol_clone", 00:04:43.877 "bdev_lvol_snapshot", 00:04:43.877 "bdev_lvol_create", 00:04:43.877 "bdev_lvol_delete_lvstore", 00:04:43.877 "bdev_lvol_rename_lvstore", 00:04:43.877 "bdev_lvol_create_lvstore", 00:04:43.877 "bdev_passthru_delete", 00:04:43.877 "bdev_passthru_create", 00:04:43.877 "bdev_nvme_cuse_unregister", 00:04:43.877 "bdev_nvme_cuse_register", 00:04:43.877 "bdev_opal_new_user", 00:04:43.877 "bdev_opal_set_lock_state", 00:04:43.877 "bdev_opal_delete", 00:04:43.877 "bdev_opal_get_info", 00:04:43.877 "bdev_opal_create", 00:04:43.877 "bdev_nvme_opal_revert", 00:04:43.877 "bdev_nvme_opal_init", 00:04:43.877 "bdev_nvme_send_cmd", 00:04:43.877 "bdev_nvme_get_path_iostat", 00:04:43.877 "bdev_nvme_get_mdns_discovery_info", 00:04:43.877 "bdev_nvme_stop_mdns_discovery", 00:04:43.877 "bdev_nvme_start_mdns_discovery", 00:04:43.877 "bdev_nvme_set_multipath_policy", 00:04:43.877 "bdev_nvme_set_preferred_path", 00:04:43.877 "bdev_nvme_get_io_paths", 00:04:43.877 "bdev_nvme_remove_error_injection", 00:04:43.877 "bdev_nvme_add_error_injection", 00:04:43.877 "bdev_nvme_get_discovery_info", 00:04:43.877 "bdev_nvme_stop_discovery", 00:04:43.877 "bdev_nvme_start_discovery", 00:04:43.877 "bdev_nvme_get_controller_health_info", 00:04:43.877 "bdev_nvme_disable_controller", 00:04:43.877 "bdev_nvme_enable_controller", 00:04:43.877 "bdev_nvme_reset_controller", 00:04:43.877 "bdev_nvme_get_transport_statistics", 
00:04:43.877 "bdev_nvme_apply_firmware", 00:04:43.877 "bdev_nvme_detach_controller", 00:04:43.877 "bdev_nvme_get_controllers", 00:04:43.877 "bdev_nvme_attach_controller", 00:04:43.877 "bdev_nvme_set_hotplug", 00:04:43.877 "bdev_nvme_set_options", 00:04:43.877 "bdev_null_resize", 00:04:43.877 "bdev_null_delete", 00:04:43.877 "bdev_null_create", 00:04:43.877 "bdev_malloc_delete", 00:04:43.877 "bdev_malloc_create" 00:04:43.877 ] 00:04:43.877 20:03:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.877 20:03:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:43.877 20:03:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1648422 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 1648422 ']' 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 1648422 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1648422 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1648422' 00:04:43.877 killing process with pid 1648422 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 1648422 00:04:43.877 20:03:30 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 1648422 00:04:44.137 00:04:44.137 real 0m1.514s 00:04:44.137 user 0m2.875s 00:04:44.137 sys 0m0.422s 00:04:44.137 20:03:31 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.137 20:03:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.137 ************************************ 00:04:44.137 END TEST spdkcli_tcp 00:04:44.137 ************************************ 00:04:44.137 20:03:31 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.137 20:03:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.137 20:03:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.137 20:03:31 -- common/autotest_common.sh@10 -- # set +x 00:04:44.396 ************************************ 00:04:44.396 START TEST dpdk_mem_utility 00:04:44.396 ************************************ 00:04:44.396 20:03:31 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.396 * Looking for test storage... 
00:04:44.396 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:44.396 20:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:44.396 20:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1648715 00:04:44.396 20:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1648715 00:04:44.396 20:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.396 20:03:31 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 1648715 ']' 00:04:44.396 20:03:31 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.396 20:03:31 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:44.396 20:03:31 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.396 20:03:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:44.396 20:03:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.396 [2024-05-16 20:03:31.417795] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:44.396 [2024-05-16 20:03:31.417868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648715 ] 00:04:44.396 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.396 [2024-05-16 20:03:31.472951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.655 [2024-05-16 20:03:31.550634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.224 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.224 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:45.224 20:03:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:45.224 20:03:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:45.224 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.224 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.224 { 00:04:45.224 "filename": "/tmp/spdk_mem_dump.txt" 00:04:45.224 } 00:04:45.224 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.224 20:03:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:45.224 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:45.224 1 heaps totaling size 814.000000 MiB 00:04:45.224 size: 814.000000 MiB heap id: 0 00:04:45.224 end heaps---------- 00:04:45.224 8 mempools totaling size 598.116089 MiB 00:04:45.224 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:45.224 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:45.224 size: 84.521057 MiB name: bdev_io_1648715 00:04:45.224 size: 51.011292 MiB name: evtpool_1648715 00:04:45.224 size: 50.003479 MiB 
name: msgpool_1648715 00:04:45.224 size: 21.763794 MiB name: PDU_Pool 00:04:45.224 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:45.224 size: 0.026123 MiB name: Session_Pool 00:04:45.224 end mempools------- 00:04:45.224 6 memzones totaling size 4.142822 MiB 00:04:45.224 size: 1.000366 MiB name: RG_ring_0_1648715 00:04:45.224 size: 1.000366 MiB name: RG_ring_1_1648715 00:04:45.224 size: 1.000366 MiB name: RG_ring_4_1648715 00:04:45.224 size: 1.000366 MiB name: RG_ring_5_1648715 00:04:45.224 size: 0.125366 MiB name: RG_ring_2_1648715 00:04:45.224 size: 0.015991 MiB name: RG_ring_3_1648715 00:04:45.224 end memzones------- 00:04:45.224 20:03:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:45.224 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:45.224 list of free elements. size: 12.519348 MiB 00:04:45.224 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:45.224 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:45.224 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:45.224 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:45.224 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:45.224 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:45.224 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:45.224 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:45.224 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:45.224 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:45.224 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:45.224 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:45.224 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:45.224 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:45.224 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:45.224 list of standard malloc elements. 
size: 199.218079 MiB 00:04:45.224 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:45.224 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:45.224 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:45.224 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:45.224 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:45.224 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:45.224 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:45.224 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:45.224 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:45.224 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:45.224 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:45.224 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:45.224 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:45.224 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:45.224 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:45.224 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:45.224 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:45.224 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:45.224 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:45.224 list of memzone associated elements. 
size: 602.262573 MiB 00:04:45.224 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:45.224 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:45.224 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:45.224 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:45.224 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:45.224 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1648715_0 00:04:45.224 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:45.224 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1648715_0 00:04:45.224 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:45.224 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1648715_0 00:04:45.224 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:45.224 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:45.224 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:45.224 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:45.224 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:45.224 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1648715 00:04:45.224 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:45.224 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1648715 00:04:45.224 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:45.224 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1648715 00:04:45.224 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:45.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:45.224 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:45.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:45.224 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:45.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:45.224 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:45.224 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:45.225 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:45.225 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1648715 00:04:45.225 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:45.225 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1648715 00:04:45.225 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:45.225 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1648715 00:04:45.225 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:45.225 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1648715 00:04:45.225 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:45.225 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1648715 00:04:45.225 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:45.225 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:45.225 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:45.225 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:45.225 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:45.225 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:45.225 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:45.225 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1648715 00:04:45.225 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:45.225 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:45.225 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:45.225 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:45.225 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:45.225 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1648715 00:04:45.225 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:45.225 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:45.225 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:45.225 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1648715 00:04:45.225 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:45.225 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1648715 00:04:45.225 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:45.225 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:45.225 20:03:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:45.225 20:03:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1648715 00:04:45.225 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 1648715 ']' 00:04:45.225 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 1648715 00:04:45.225 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:45.225 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:45.225 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1648715 00:04:45.484 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:45.484 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:45.484 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1648715' 00:04:45.484 killing process with pid 1648715 00:04:45.484 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 1648715 00:04:45.484 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 1648715 00:04:45.743 00:04:45.743 real 0m1.411s 00:04:45.743 user 0m1.499s 00:04:45.743 sys 0m0.389s 00:04:45.743 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.743 20:03:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.743 ************************************ 00:04:45.743 END TEST dpdk_mem_utility 00:04:45.743 ************************************ 00:04:45.743 20:03:32 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:45.743 20:03:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.743 20:03:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.743 20:03:32 -- common/autotest_common.sh@10 -- # set +x 00:04:45.743 ************************************ 00:04:45.743 START TEST event 00:04:45.743 ************************************ 00:04:45.743 20:03:32 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:45.743 * Looking for test storage... 
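[Editor's note] The heap, mempool, and memzone tables above are produced in two steps: an RPC asks the live target to dump its DPDK allocator state to a file, and dpdk_mem_info.py pretty-prints that file, first as a summary and then (with -m 0, per the trace at @23) as the element-level view of heap id 0:

./scripts/rpc.py env_dpdk_get_mem_stats   # -> {"filename": "/tmp/spdk_mem_dump.txt"}
./scripts/dpdk_mem_info.py                # heaps / mempools / memzones summary
./scripts/dpdk_mem_info.py -m 0           # free/malloc element detail for heap 0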
00:04:45.743 20:03:32 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh
00:04:45.743 20:03:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:45.743 20:03:32 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:45.743 20:03:32 -- common/autotest_common.sh@10 -- # set +x
00:04:45.743 ************************************
00:04:45.743 START TEST event
00:04:45.743 ************************************
00:04:45.743 20:03:32 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh
00:04:45.743 * Looking for test storage...
00:04:45.743 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event
00:04:45.743 20:03:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:45.743 20:03:32 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:45.744 20:03:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:45.744 20:03:32 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:04:45.744 20:03:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:45.744 20:03:32 event -- common/autotest_common.sh@10 -- # set +x
00:04:46.002 ************************************
00:04:46.002 START TEST event_perf
00:04:46.002 ************************************
00:04:46.003 20:03:32 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:46.003 Running I/O for 1 seconds...[2024-05-16 20:03:32.928478] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:04:46.003 [2024-05-16 20:03:32.928550] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648993 ]
00:04:46.003 EAL: No free 2048 kB hugepages reported on node 1
00:04:46.003 [2024-05-16 20:03:32.986299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:46.003 [2024-05-16 20:03:33.064815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:46.003 [2024-05-16 20:03:33.064910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:46.003 [2024-05-16 20:03:33.065011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:46.003 [2024-05-16 20:03:33.065012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:47.380 Running I/O for 1 seconds...
00:04:47.380 lcore 0: 196150
00:04:47.380 lcore 1: 196150
00:04:47.380 lcore 2: 196150
00:04:47.380 lcore 3: 196150
00:04:47.380 done.
00:04:47.380 
00:04:47.380 real 0m1.226s
00:04:47.380 user 0m4.137s
00:04:47.380 sys 0m0.084s
00:04:47.380 20:03:34 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:47.380 20:03:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:47.380 ************************************
00:04:47.380 END TEST event_perf
00:04:47.380 ************************************
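event_perf takes a core mask and a duration and reports a per-lcore event count for the window (about 196k events per core in the run above). Reproducing the run by hand uses exactly the flags run_test passed:

    # Manual invocation of the benchmark wrapped by run_test above
    # (4-core mask, 1 second run; paths as laid out in this workspace).
    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1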
00:04:47.380 20:03:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:47.380 20:03:34 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:04:47.380 20:03:34 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:47.380 20:03:34 event -- common/autotest_common.sh@10 -- # set +x
00:04:47.380 ************************************
00:04:47.380 START TEST event_reactor
00:04:47.380 ************************************
00:04:47.380 20:03:34 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:47.380 [2024-05-16 20:03:34.230854] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:04:47.380 [2024-05-16 20:03:34.230957] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649240 ]
00:04:47.380 EAL: No free 2048 kB hugepages reported on node 1
00:04:47.380 [2024-05-16 20:03:34.290064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:47.380 [2024-05-16 20:03:34.371609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:48.433 test_start
00:04:48.433 oneshot
00:04:48.433 tick 100
00:04:48.433 tick 100
00:04:48.433 tick 250
00:04:48.433 tick 100
00:04:48.433 tick 100
00:04:48.433 tick 100
00:04:48.433 tick 250
00:04:48.433 tick 500
00:04:48.433 tick 100
00:04:48.433 tick 100
00:04:48.433 tick 250
00:04:48.433 tick 100
00:04:48.433 tick 100
00:04:48.433 test_end
00:04:48.433 
00:04:48.433 real 0m1.227s
00:04:48.433 user 0m1.143s
00:04:48.433 sys 0m0.080s
00:04:48.433 20:03:35 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:48.433 20:03:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:48.433 ************************************
00:04:48.433 END TEST event_reactor
00:04:48.433 ************************************
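The reactor test emits one tick line per timer expiry; the number after tick appears to be the timer's configured period, so the shorter periods fire proportionally more often in the 1 second window. A quick tally from a saved console log (reactor.log is a hypothetical capture of the output above):

    # Count how many times each tick period fired.
    grep -oE 'tick [0-9]+' reactor.log | sort | uniq -c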
00:04:48.433 20:03:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:48.433 20:03:35 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:04:48.433 20:03:35 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:48.433 20:03:35 event -- common/autotest_common.sh@10 -- # set +x
00:04:48.433 ************************************
00:04:48.433 START TEST event_reactor_perf
00:04:48.433 ************************************
00:04:48.433 20:03:35 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:48.433 [2024-05-16 20:03:35.534837] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:04:48.433 [2024-05-16 20:03:35.534942] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649480 ]
00:04:48.433 EAL: No free 2048 kB hugepages reported on node 1
00:04:48.692 [2024-05-16 20:03:35.595893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:48.692 [2024-05-16 20:03:35.676193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.629 test_start
00:04:49.629 test_end
00:04:49.629 Performance: 927957 events per second
00:04:49.629 
00:04:49.629 real 0m1.228s
00:04:49.629 user 0m1.146s
00:04:49.629 sys 0m0.077s
00:04:49.629 20:03:36 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:49.629 20:03:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:49.629 ************************************
00:04:49.629 END TEST event_reactor_perf
00:04:49.629 ************************************
00:04:49.889 20:03:36 event -- event/event.sh@49 -- # uname -s
00:04:49.889 20:03:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:49.889 20:03:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:49.889 20:03:36 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:49.889 20:03:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:49.889 20:03:36 event -- common/autotest_common.sh@10 -- # set +x
00:04:49.889 ************************************
00:04:49.889 START TEST event_scheduler
00:04:49.889 ************************************
00:04:49.889 20:03:36 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:49.889 * Looking for test storage...
00:04:49.889 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler
00:04:49.889 20:03:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:49.889 20:03:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1649749
00:04:49.889 20:03:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:49.889 20:03:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:49.889 20:03:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1649749
00:04:49.889 20:03:36 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 1649749 ']'
00:04:49.889 20:03:36 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:49.889 20:03:36 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100
00:04:49.889 20:03:36 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
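waitforlisten blocks until the freshly launched app is reachable over its RPC socket; the trace shows its visible pieces (the pid guard, rpc_addr defaulting to /var/tmp/spdk.sock, max_retries=100 and the echo). A simplified stand-in under those assumptions (the real helper in autotest_common.sh does more, such as issuing a test RPC):

    # Simplified waitforlisten: poll until the app's RPC socket appears.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1      # give up if the app already died
            [ -S "$rpc_addr" ] && return 0  # socket exists: app is listening
            sleep 0.5                       # retry interval is an assumption
        done
        return 1
    }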
00:04:49.889 20:03:36 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable
00:04:49.889 20:03:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:49.889 [2024-05-16 20:03:36.928382] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:04:49.889 [2024-05-16 20:03:36.928476] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649749 ]
00:04:49.889 EAL: No free 2048 kB hugepages reported on node 1
00:04:49.889 [2024-05-16 20:03:36.977501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:50.149 [2024-05-16 20:03:37.057589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.149 [2024-05-16 20:03:37.057656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:50.149 [2024-05-16 20:03:37.057763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:50.149 [2024-05-16 20:03:37.057765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0
00:04:50.149 20:03:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:50.149 POWER: Env isn't set yet!
00:04:50.149 POWER: Attempting to initialise ACPI cpufreq power management...
00:04:50.149 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:50.149 POWER: Cannot set governor of lcore 0 to userspace
00:04:50.149 POWER: Attempting to initialise PSTAT power management...
00:04:50.149 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:04:50.149 POWER: Initialized successfully for lcore 0 power management
00:04:50.149 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:04:50.149 POWER: Initialized successfully for lcore 1 power management
00:04:50.149 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:04:50.149 POWER: Initialized successfully for lcore 2 power management
00:04:50.149 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:04:50.149 POWER: Initialized successfully for lcore 3 power management
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.149 20:03:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:50.149 [2024-05-16 20:03:37.221321] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
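The POWER lines are the DPDK power library reacting to framework_set_scheduler dynamic: each reactor's core is switched to the 'performance' cpufreq governor, and the 'powersave' lines at the end of the test show the restore. In sysfs terms the operation amounts to the following (illustrative only; the library writes the same scaling_governor files directly, as the failed-write path in the log shows):

    # What per-lcore governor switching looks like at the sysfs level.
    for cpu in 0 1 2 3; do
        echo performance | sudo tee /sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor
    done
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # -> performance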
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.149 20:03:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:50.149 20:03:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:50.149 ************************************
00:04:50.149 START TEST scheduler_create_thread
00:04:50.149 ************************************
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.149 2
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.149 3
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.149 4
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.149 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.408 5
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.408 6
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.408 7
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.408 8
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.408 9
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.408 10
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:50.408 20:03:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.786 20:03:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.786 20:03:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:51.786 20:03:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:51.786 20:03:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.786 20:03:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:52.723 20:03:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:52.723 20:03:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:52.723 20:03:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:52.723 20:03:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:53.291 20:03:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:53.291 20:03:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:53.291 20:03:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:53.291 20:03:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:53.291 20:03:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:54.226 20:03:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:54.226 
00:04:54.226 real 0m3.892s
00:04:54.226 user 0m0.022s
00:04:54.226 sys 0m0.006s
00:04:54.226 20:03:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:54.226 20:03:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:54.226 ************************************
00:04:54.226 END TEST scheduler_create_thread
00:04:54.226 ************************************
00:04:54.226 20:03:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:54.226 20:03:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1649749
00:04:54.226 20:03:41 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 1649749 ']'
00:04:54.226 20:03:41 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 1649749
00:04:54.226 20:03:41 event.event_scheduler -- common/autotest_common.sh@951 -- # uname
00:04:54.226 20:03:41 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:04:54.226 20:03:41 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1649749
00:04:54.226 20:03:41 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:04:54.226 20:03:41 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:04:54.226 20:03:41 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1649749'
killing process with pid 1649749
20:03:41 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 1649749
20:03:41 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 1649749
00:04:54.484 [2024-05-16 20:03:41.533178] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
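The scheduler_create_thread subtest drives the test app entirely through plugin RPCs: threads are created pinned to single-core masks with an activity percentage, one is re-weighted with scheduler_thread_set_active, and one is deleted again. Issued by hand against the app's default RPC socket, the same calls look like this (rpc.py path relative to the spdk checkout; capturing the printed thread id in a variable mirrors the thread_id= assignments in the trace):

    RPC=./scripts/rpc.py
    # Active thread pinned to core 0 (mask 0x1) at 100% load, plus an
    # idle one on the same core, as in the subtest above.
    $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Unpinned thread at 30% activity; the RPC prints the new thread id.
    tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30)
    $RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    $RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"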
00:04:54.743 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:04:54.743 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:04:54.743 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:04:54.743 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:04:54.743 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:04:54.743 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:04:54.743 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:04:54.743 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:04:54.743 
00:04:54.743 real 0m4.975s
00:04:54.743 user 0m9.517s
00:04:54.743 sys 0m0.342s
00:04:54.743 20:03:41 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:54.743 20:03:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:54.743 ************************************
00:04:54.743 END TEST event_scheduler
00:04:54.743 ************************************
00:04:54.743 20:03:41 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:54.743 20:03:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:54.743 20:03:41 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:54.743 20:03:41 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:54.743 20:03:41 event -- common/autotest_common.sh@10 -- # set +x
00:04:54.743 ************************************
00:04:54.743 START TEST app_repeat
00:04:54.743 ************************************
00:04:54.743 20:03:41 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test
00:04:54.743 20:03:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:54.743 20:03:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:54.743 20:03:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:54.743 20:03:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:54.743 20:03:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:54.743 20:03:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:54.743 20:03:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:55.001 20:03:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1650505
00:04:55.001 20:03:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:55.001 20:03:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:55.001 20:03:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1650505'
Process app_repeat pid: 1650505
00:04:55.001 20:03:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:55.001 20:03:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:04:55.001 20:03:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1650505 /var/tmp/spdk-nbd.sock
00:04:55.001 20:03:41 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1650505 ']'
00:04:55.001 20:03:41 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:55.001 20:03:41 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:04:55.001 20:03:41 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:55.001 20:03:41 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:04:55.001 20:03:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:55.001 [2024-05-16 20:03:41.909399] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:04:55.001 [2024-05-16 20:03:41.909499] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650505 ]
00:04:55.001 EAL: No free 2048 kB hugepages reported on node 1
00:04:55.001 [2024-05-16 20:03:41.965480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:55.001 [2024-05-16 20:03:42.041556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:55.001 [2024-05-16 20:03:42.041559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.001 20:03:42 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:04:55.001 20:03:42 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:04:55.259 20:03:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:55.259 Malloc0
00:04:55.259 20:03:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:55.518 Malloc1
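Each round builds the same data path: two RAM-backed malloc bdevs are created over the app's RPC socket and then exported through the kernel nbd driver as /dev/nbd0 and /dev/nbd1. Condensed to the bare RPC calls used above (rpc.py path abbreviated):

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # 64 MiB malloc bdevs with a 4096-byte block size; rpc.py prints the name.
    $RPC bdev_malloc_create 64 4096    # -> Malloc0
    $RPC bdev_malloc_create 64 4096    # -> Malloc1
    # Export both through nbd (the earlier 'modprobe nbd' made this possible).
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1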
00:04:55.518 20:03:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:55.518 20:03:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
/dev/nbd0
20:03:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
20:03:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
20:03:42 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
20:03:42 event.app_repeat -- common/autotest_common.sh@865 -- # local i
20:03:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
20:03:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
20:03:42 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
20:03:42 event.app_repeat -- common/autotest_common.sh@869 -- # break
20:03:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
20:03:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
20:03:42 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:55.776 1+0 records in
00:04:55.776 1+0 records out
00:04:55.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179095 s, 22.9 MB/s
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
/dev/nbd1
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:55.776 1+0 records in
00:04:55.776 1+0 records out
00:04:55.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232016 s, 17.7 MB/s
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:04:55.776 20:03:42 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
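waitfornbd treats an nbd device as ready once it shows up in /proc/partitions and a one-block direct-I/O read succeeds; the nbdtest scratch file in the trace is the destination of that probe read. A rough approximation of the traced sequence (the real helper retries both steps up to 20 times; the sleep interval here is an assumption):

    # Approximation of the waitfornbd probe traced above.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct 4 KiB read proves the device answers I/O.
        dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s nbdtest)" != 0 ] || return 1
        rm -f nbdtest
        return 0
    }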
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:55.776 20:03:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:56.033 20:03:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:56.033 {
00:04:56.033 "nbd_device": "/dev/nbd0",
00:04:56.033 "bdev_name": "Malloc0"
00:04:56.033 },
00:04:56.033 {
00:04:56.033 "nbd_device": "/dev/nbd1",
00:04:56.033 "bdev_name": "Malloc1"
00:04:56.033 }
00:04:56.033 ]'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:56.034 {
00:04:56.034 "nbd_device": "/dev/nbd0",
00:04:56.034 "bdev_name": "Malloc0"
00:04:56.034 },
00:04:56.034 {
00:04:56.034 "nbd_device": "/dev/nbd1",
00:04:56.034 "bdev_name": "Malloc1"
00:04:56.034 }
00:04:56.034 ]'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:56.034 /dev/nbd1'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:56.034 /dev/nbd1'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:56.034 256+0 records in
00:04:56.034 256+0 records out
00:04:56.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010397 s, 101 MB/s
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:56.034 256+0 records in
00:04:56.034 256+0 records out
00:04:56.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140173 s, 74.8 MB/s
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:56.034 256+0 records in
00:04:56.034 256+0 records out
00:04:56.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158688 s, 66.1 MB/s
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
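nbd_dd_data_verify is a plain round-trip check: write 1 MiB of random data through each nbd device with O_DIRECT, then cmp the first 1 MiB of each device back against the pattern file. The same steps in isolation:

    # Write/verify round trip as performed by nbd_dd_data_verify above.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256       # 1 MiB pattern
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$dev bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest $dev   # byte-for-byte compare of first 1 MiB
    done
    rm nbdrandtest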
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:56.034 20:03:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:56.292 20:03:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:56.551 20:03:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:56.809 20:03:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:56.809 20:03:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:57.068 20:03:44 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:57.068 [2024-05-16 20:03:44.188168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:57.327 [2024-05-16 20:03:44.262783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:57.327 [2024-05-16 20:03:44.262784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.327 [2024-05-16 20:03:44.309947] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:57.327 [2024-05-16 20:03:44.309992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
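nbd_get_count derives the number of live exports by counting /dev/nbd entries in the nbd_get_disks JSON: it is 2 while both disks are attached and must drop to 0 after nbd_stop_disk, or the harness bails out. The traced jq pipeline gathered in one place (the '|| true' mirrors the trace, since grep -c exits non-zero on zero matches):

    # Count attached nbd devices from the RPC's JSON listing.
    count=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "$count"   # 2 with both disks attached, 0 once they are stopped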
00:04:59.859 20:03:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:59.859 20:03:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
00:05:00.118 20:03:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1650505 /var/tmp/spdk-nbd.sock
00:05:00.118 20:03:47 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1650505 ']'
00:05:00.118 20:03:47 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:00.118 20:03:47 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:00.118 20:03:47 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:00.118 20:03:47 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:00.118 20:03:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:00.118 20:03:47 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:00.118 20:03:47 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:05:00.118 20:03:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:00.378 Malloc0
00:05:00.378 20:03:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:00.378 Malloc1
00:05:00.637 20:03:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:00.637 /dev/nbd0
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:00.637 1+0 records in
00:05:00.637 1+0 records out
00:05:00.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236773 s, 17.3 MB/s
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:05:00.637 20:03:47 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:00.637 20:03:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:00.896 /dev/nbd1
00:05:00.896 20:03:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:00.897 20:03:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:00.897 1+0 records in
00:05:00.897 1+0 records out
00:05:00.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196402 s, 20.9 MB/s
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:05:00.897 20:03:47 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:05:00.897 20:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:00.897 20:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:00.897 20:03:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:00.897 20:03:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:00.897 20:03:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:01.156 {
00:05:01.156 "nbd_device": "/dev/nbd0",
00:05:01.156 "bdev_name": "Malloc0"
00:05:01.156 },
00:05:01.156 {
00:05:01.156 "nbd_device": "/dev/nbd1",
00:05:01.156 "bdev_name": "Malloc1"
00:05:01.156 }
00:05:01.156 ]'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:01.156 {
00:05:01.156 "nbd_device": "/dev/nbd0",
00:05:01.156 "bdev_name": "Malloc0"
00:05:01.156 },
00:05:01.156 {
00:05:01.156 "nbd_device": "/dev/nbd1",
00:05:01.156 "bdev_name": "Malloc1"
00:05:01.156 }
00:05:01.156 ]'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:01.156 /dev/nbd1'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:01.156 /dev/nbd1'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:01.156 256+0 records in
00:05:01.156 256+0 records out
00:05:01.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104105 s, 101 MB/s
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:01.156 256+0 records in
00:05:01.156 256+0 records out
00:05:01.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146955 s, 71.4 MB/s
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:01.156 256+0 records in
00:05:01.156 256+0 records out
00:05:01.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152141 s, 68.9 MB/s
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:01.156 20:03:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:01.157 20:03:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:01.416 20:03:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.677 20:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.936 20:03:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.936 20:03:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.936 20:03:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.195 [2024-05-16 20:03:49.244357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.195 [2024-05-16 20:03:49.317040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.195 [2024-05-16 20:03:49.317040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.454 [2024-05-16 20:03:49.364716] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.454 [2024-05-16 20:03:49.364763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
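Round 1 of app_repeat ends here: both NBD disks are stopped, nbd_get_disks returns an empty list, and spdk_kill_instance SIGTERM restarts the app before the next round. Reconstructed from the event.sh line markers visible across the rounds above and below (@23-@35), the driving loop is roughly the sketch below; the pid variable name and the shortened rpc.py path are assumptions.

    # sketch of the app_repeat round loop in test/event/event.sh
    for i in {0..2}; do                                              # @23
        echo "spdk_app_start Round $i"                               # @24
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock           # @25
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096  # @27: Malloc0
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096  # @28: Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock \
            'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'                  # @30
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM  # @34
        sleep 3                                                      # @35
    done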
00:05:04.991 20:03:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.991 20:03:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.991 spdk_app_start Round 2 00:05:04.991 20:03:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1650505 /var/tmp/spdk-nbd.sock 00:05:04.991 20:03:52 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1650505 ']' 00:05:04.991 20:03:52 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.991 20:03:52 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:04.991 20:03:52 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.991 20:03:52 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:04.991 20:03:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.249 20:03:52 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:05.249 20:03:52 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:05.249 20:03:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.249 Malloc0 00:05:05.508 20:03:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.508 Malloc1 00:05:05.508 20:03:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.508 20:03:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.767 /dev/nbd0 00:05:05.767 20:03:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.767 20:03:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.767 1+0 records in 00:05:05.767 1+0 records out 00:05:05.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226354 s, 18.1 MB/s 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:05.767 20:03:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:05.767 20:03:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.767 20:03:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.767 20:03:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.026 /dev/nbd1 00:05:06.026 20:03:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.026 20:03:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.026 1+0 records in 00:05:06.026 1+0 records out 00:05:06.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243058 s, 16.9 MB/s 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:06.026 20:03:52 
event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:06.026 20:03:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:06.026 20:03:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.026 20:03:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.026 20:03:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.026 20:03:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.026 20:03:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.026 20:03:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.026 { 00:05:06.026 "nbd_device": "/dev/nbd0", 00:05:06.026 "bdev_name": "Malloc0" 00:05:06.026 }, 00:05:06.026 { 00:05:06.026 "nbd_device": "/dev/nbd1", 00:05:06.026 "bdev_name": "Malloc1" 00:05:06.026 } 00:05:06.026 ]' 00:05:06.026 20:03:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.026 { 00:05:06.026 "nbd_device": "/dev/nbd0", 00:05:06.026 "bdev_name": "Malloc0" 00:05:06.026 }, 00:05:06.026 { 00:05:06.026 "nbd_device": "/dev/nbd1", 00:05:06.026 "bdev_name": "Malloc1" 00:05:06.026 } 00:05:06.026 ]' 00:05:06.026 20:03:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.285 /dev/nbd1' 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.285 /dev/nbd1' 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.285 256+0 records in 00:05:06.285 256+0 records out 00:05:06.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101449 s, 103 MB/s 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.285 256+0 records in 00:05:06.285 256+0 records out 00:05:06.285 1048576 bytes 
(1.0 MB, 1.0 MiB) copied, 0.0148694 s, 70.5 MB/s 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.285 256+0 records in 00:05:06.285 256+0 records out 00:05:06.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160174 s, 65.5 MB/s 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.285 20:03:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.544 20:03:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.804 20:03:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.804 20:03:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.062 20:03:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.321 [2024-05-16 20:03:54.280080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.321 [2024-05-16 20:03:54.353715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.321 [2024-05-16 20:03:54.353716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.321 [2024-05-16 20:03:54.400892] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.321 [2024-05-16 20:03:54.400956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
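Teardown in each round polls /proc/partitions until the nbd devices disappear. From the bdev/nbd_common.sh markers (@35-@45), waitfornbd_exit is approximately the loop below; the retry delay is an assumption, since only the iterations that actually ran show up in the xtrace.

    # sketch of waitfornbd_exit from bdev/nbd_common.sh
    waitfornbd_exit() {
        local nbd_name=$1                                    # @35
        local i
        for ((i = 1; i <= 20; i++)); do                      # @37: bounded poll
            if grep -q -w "$nbd_name" /proc/partitions; then # @38: still present
                sleep 0.1                                    # assumed delay
            else
                break                                        # @41: device is gone
            fi
        done
        return 0                                             # @45
    }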
00:05:10.609 20:03:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1650505 /var/tmp/spdk-nbd.sock 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1650505 ']' 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:10.609 20:03:57 event.app_repeat -- event/event.sh@39 -- # killprocess 1650505 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 1650505 ']' 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 1650505 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1650505 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1650505' 00:05:10.609 killing process with pid 1650505 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@965 -- # kill 1650505 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@970 -- # wait 1650505 00:05:10.609 spdk_app_start is called in Round 0. 00:05:10.609 Shutdown signal received, stop current app iteration 00:05:10.609 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:10.609 spdk_app_start is called in Round 1. 00:05:10.609 Shutdown signal received, stop current app iteration 00:05:10.609 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:10.609 spdk_app_start is called in Round 2. 00:05:10.609 Shutdown signal received, stop current app iteration 00:05:10.609 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:10.609 spdk_app_start is called in Round 3. 
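The shutdown path above walks killprocess in full (autotest_common.sh @946-@970): validate the pid, confirm via ps that the target is an SPDK reactor rather than a sudo wrapper, then kill and wait. A close sketch; the empty-pid and sudo branches are not taken in this run, so their bodies are assumed.

    # sketch of killprocess from test/common/autotest_common.sh
    killprocess() {
        [ -z "$1" ] && return 1                                # @946: a pid is required
        if kill -0 "$1"; then                                  # @950: still running?
            if [ "$(uname)" = Linux ]; then                    # @951
                process_name=$(ps --no-headers -o comm= "$1")  # @952: reactor_0 here
            fi
            [ "$process_name" = sudo ] && :  # @956: assumed -- would target sudo's child
            echo "killing process with pid $1"                 # @964
            kill "$1"                                          # @965
        fi
        wait "$1"                                              # @970: reap, propagate status
    }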
00:05:10.609 Shutdown signal received, stop current app iteration 00:05:10.609 20:03:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:10.609 20:03:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:10.609 00:05:10.609 real 0m15.600s 00:05:10.609 user 0m33.483s 00:05:10.609 sys 0m2.614s 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.609 20:03:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.609 ************************************ 00:05:10.609 END TEST app_repeat 00:05:10.609 ************************************ 00:05:10.609 20:03:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:10.609 20:03:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:10.609 20:03:57 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.609 20:03:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.609 20:03:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.609 ************************************ 00:05:10.609 START TEST cpu_locks 00:05:10.609 ************************************ 00:05:10.609 20:03:57 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:10.609 * Looking for test storage... 00:05:10.609 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:10.609 20:03:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:10.609 20:03:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:10.609 20:03:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:10.609 20:03:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:10.609 20:03:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.609 20:03:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.609 20:03:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.609 ************************************ 00:05:10.609 START TEST default_locks 00:05:10.609 ************************************ 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1653345 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1653345 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1653345 ']' 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
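Every target launch in cpu_locks.sh is gated by waitforlisten, whose trace starts here and completes in the records below (@827-@860). The sketch keeps the visible skeleton; the poll loop itself runs under xtrace_disable, so its body and the xtrace_restore are assumptions.

    # sketch of waitforlisten from test/common/autotest_common.sh
    waitforlisten() {
        [ -z "$1" ] && return 1                    # @827: need a pid
        local rpc_addr=${2:-/var/tmp/spdk.sock}    # @831: default RPC socket
        local max_retries=100                      # @832
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."  # @834
        xtrace_disable                             # @836: poll loop is untraced
        local i
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$1" 2> /dev/null || break     # assumed: bail if the target died
            [ -S "$rpc_addr" ] && break            # assumed: socket up means listening
            sleep 0.1
        done
        xtrace_restore                             # assumed counterpart of @836
        (( i == 0 )) && return 1                   # @856: retries exhausted
        return 0                                   # @860
    }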
00:05:10.609 20:03:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:10.609 20:03:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.609 [2024-05-16 20:03:57.717763] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:10.609 [2024-05-16 20:03:57.717838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653345 ] 00:05:10.609 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.868 [2024-05-16 20:03:57.769749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.868 [2024-05-16 20:03:57.843724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1653345 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1653345 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.126 lslocks: write error 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1653345 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 1653345 ']' 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 1653345 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:11.126 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1653345 00:05:11.385 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:11.385 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:11.385 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1653345' 00:05:11.385 killing process with pid 1653345 00:05:11.385 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 1653345 00:05:11.385 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 1653345 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1653345 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1653345 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 1653345 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1653345 ']' 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.644 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1653345) - No such process 00:05:11.644 ERROR: process (pid: 1653345) is no longer running 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:11.644 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.645 00:05:11.645 real 0m0.945s 00:05:11.645 user 0m0.871s 00:05:11.645 sys 0m0.419s 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.645 20:03:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.645 ************************************ 00:05:11.645 END TEST default_locks 00:05:11.645 ************************************ 00:05:11.645 20:03:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:11.645 20:03:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.645 20:03:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.645 20:03:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.645 ************************************ 00:05:11.645 START TEST default_locks_via_rpc 00:05:11.645 ************************************ 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1653591 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1653591 00:05:11.645 20:03:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1653591 ']' 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.645 20:03:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.645 [2024-05-16 20:03:58.724870] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:11.645 [2024-05-16 20:03:58.724907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653591 ] 00:05:11.645 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.645 [2024-05-16 20:03:58.776410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.904 [2024-05-16 20:03:58.860693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1653591 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1653591 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1653591 00:05:12.162 20:03:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 1653591 ']' 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 1653591 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:12.162 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1653591 00:05:12.422 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:12.422 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:12.422 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1653591' 00:05:12.422 killing process with pid 1653591 00:05:12.422 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 1653591 00:05:12.422 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 1653591 00:05:12.680 00:05:12.680 real 0m0.932s 00:05:12.680 user 0m0.873s 00:05:12.680 sys 0m0.420s 00:05:12.680 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.680 20:03:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.680 ************************************ 00:05:12.680 END TEST default_locks_via_rpc 00:05:12.680 ************************************ 00:05:12.680 20:03:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:12.680 20:03:59 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.680 20:03:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.680 20:03:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.681 ************************************ 00:05:12.681 START TEST non_locking_app_on_locked_coremask 00:05:12.681 ************************************ 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1653647 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1653647 /var/tmp/spdk.sock 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1653647 ']' 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
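The default_locks_via_rpc run that just ENDed toggles the core lock at runtime instead of restarting the target: from its cpu_locks.sh markers (@61-@73), the flow is the sketch below. Backgrounding with & and $! is assumed, since only the pid assignment appears in the log.

    # sketch of default_locks_via_rpc from test/event/cpu_locks.sh
    default_locks_via_rpc() {
        /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &  # @61
        spdk_tgt_pid=$!                            # @62
        waitforlisten $spdk_tgt_pid                # @63
        rpc_cmd framework_disable_cpumask_locks    # @65: release the core-0 lock live
        no_locks                                   # @67: assert no spdk_cpu_lock held
        rpc_cmd framework_enable_cpumask_locks     # @69: retake it
        locks_exist $spdk_tgt_pid                  # @71: assert the lock is back
        killprocess $spdk_tgt_pid                  # @73
    }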
00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:12.681 20:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.681 [2024-05-16 20:03:59.727917] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:12.681 [2024-05-16 20:03:59.727953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653647 ] 00:05:12.681 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.681 [2024-05-16 20:03:59.778451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.940 [2024-05-16 20:03:59.862237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.940 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.940 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1653842 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1653842 /var/tmp/spdk2.sock 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1653842 ']' 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.202 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.202 [2024-05-16 20:04:00.109086] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:13.202 [2024-05-16 20:04:00.109157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653842 ] 00:05:13.202 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.202 [2024-05-16 20:04:00.182735] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
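non_locking_app_on_locked_coremask now has two targets alive on the same cpumask: the first owns the core-0 lock, and the second opted out with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice). The launch sequence from the @79-@85 markers, with spdk_tgt's path shortened and the backgrounding assumed:

    # sketch of the two-target launch (cpu_locks.sh@79-@85)
    spdk_tgt -m 0x1 &                                                # @79: takes spdk_cpu_lock
    spdk_tgt_pid=$!                                                  # @80
    waitforlisten $spdk_tgt_pid /var/tmp/spdk.sock                   # @81
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # @83: opts out
    spdk_tgt_pid2=$!                                                 # @84
    waitforlisten $spdk_tgt_pid2 /var/tmp/spdk2.sock                 # @85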
00:05:13.202 [2024-05-16 20:04:00.182766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.463 [2024-05-16 20:04:00.349340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.031 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:14.031 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:14.031 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1653647 00:05:14.031 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1653647 00:05:14.031 20:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.289 lslocks: write error 00:05:14.289 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1653647 00:05:14.289 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1653647 ']' 00:05:14.289 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1653647 00:05:14.289 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:14.290 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:14.290 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1653647 00:05:14.290 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:14.290 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:14.290 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1653647' 00:05:14.290 killing process with pid 1653647 00:05:14.290 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1653647 00:05:14.290 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1653647 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1653842 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1653842 ']' 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1653842 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1653842 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1653842' 00:05:14.888 
killing process with pid 1653842 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1653842 00:05:14.888 20:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1653842 00:05:15.175 00:05:15.175 real 0m2.585s 00:05:15.175 user 0m2.666s 00:05:15.175 sys 0m0.811s 00:05:15.175 20:04:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.175 20:04:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.175 ************************************ 00:05:15.175 END TEST non_locking_app_on_locked_coremask 00:05:15.175 ************************************ 00:05:15.434 20:04:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:15.434 20:04:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.434 20:04:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.434 20:04:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.434 ************************************ 00:05:15.434 START TEST locking_app_on_unlocked_coremask 00:05:15.434 ************************************ 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1654113 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1654113 /var/tmp/spdk.sock 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1654113 ']' 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:15.434 20:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.434 [2024-05-16 20:04:02.399680] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:15.434 [2024-05-16 20:04:02.399752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654113 ] 00:05:15.434 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.434 [2024-05-16 20:04:02.455386] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
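Each of these rounds asserts lock state with locks_exist (cpu_locks.sh@22), and every call is followed by "lslocks: write error". The most likely explanation is a benign EPIPE: grep -q exits on its first match, and lslocks complains when its next write hits the closed pipe. Both traced commands, as a sketch:

    # sketch of locks_exist from test/event/cpu_locks.sh@22
    locks_exist() {
        # grep -q closes the pipe on the first spdk_cpu_lock hit, so
        # lslocks prints "lslocks: write error"; the pipeline's exit
        # status still reflects whether the lock file was found
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }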
00:05:15.434 [2024-05-16 20:04:02.455421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.434 [2024-05-16 20:04:02.539977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1654333 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1654333 /var/tmp/spdk2.sock 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1654333 ']' 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.372 20:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.372 [2024-05-16 20:04:03.238970] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
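locking_app_on_unlocked_coremask inverts the previous scenario: the first target starts without the lock ("CPU core locks deactivated") and the second, default-configured target is the one that claims core 0. From the @97-@103 markers, with backgrounding assumed as before:

    # sketch of the inverted launch (cpu_locks.sh@97-@103)
    spdk_tgt -m 0x1 --disable-cpumask-locks &         # @97: no lock taken
    spdk_tgt_pid=$!                                   # @98
    waitforlisten $spdk_tgt_pid                       # @99
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &          # @101: this one claims core 0
    spdk_tgt_pid2=$!                                  # @102
    waitforlisten $spdk_tgt_pid2 /var/tmp/spdk2.sock  # @103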
00:05:16.373 [2024-05-16 20:04:03.239044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654333 ] 00:05:16.373 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.373 [2024-05-16 20:04:03.309596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.373 [2024-05-16 20:04:03.454779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.942 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:16.942 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:16.942 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1654333 00:05:16.942 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1654333 00:05:16.942 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.511 lslocks: write error 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1654113 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1654113 ']' 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1654113 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1654113 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1654113' 00:05:17.511 killing process with pid 1654113 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1654113 00:05:17.511 20:04:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1654113 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1654333 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1654333 ']' 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1654333 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1654333 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1654333' 00:05:18.449 killing process with pid 1654333 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1654333 00:05:18.449 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1654333 00:05:18.708 00:05:18.708 real 0m3.264s 00:05:18.708 user 0m3.441s 00:05:18.708 sys 0m0.949s 00:05:18.708 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.708 20:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.708 ************************************ 00:05:18.708 END TEST locking_app_on_unlocked_coremask 00:05:18.709 ************************************ 00:05:18.709 20:04:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:18.709 20:04:05 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:18.709 20:04:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.709 20:04:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.709 ************************************ 00:05:18.709 START TEST locking_app_on_locked_coremask 00:05:18.709 ************************************ 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1654807 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1654807 /var/tmp/spdk.sock 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1654807 ']' 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:18.709 20:04:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.709 [2024-05-16 20:04:05.737331] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
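locking_app_on_locked_coremask asserts the negative case: a second default-configured target must fail to start while pid 1654807 holds core 0. The assertion uses the NOT wrapper traced earlier for default_locks (@648-@675); a sketch, with the branches not taken in this run left as assumptions:

    # sketch of NOT from test/common/autotest_common.sh
    NOT() {
        local es=0                        # @648
        valid_exec_arg "$@" || return 1   # @650: only wrap runnable commands
        "$@" || es=$?                     # @651: capture the wrapped failure
        # @659/@670: exits above 128 (signals) and an expected EXIT_STATUS
        # get special handling; neither branch fires in this run
        (( !es == 0 ))                    # @675: invert, so NOT succeeds on failure
    }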
00:05:18.709 [2024-05-16 20:04:05.737408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654807 ] 00:05:18.709 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.709 [2024-05-16 20:04:05.793798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.969 [2024-05-16 20:04:05.873499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1654819 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1654819 /var/tmp/spdk2.sock 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1654819 /var/tmp/spdk2.sock 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1654819 /var/tmp/spdk2.sock 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1654819 ']' 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:19.538 20:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.538 [2024-05-16 20:04:06.570589] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:19.538 [2024-05-16 20:04:06.570634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654819 ] 00:05:19.538 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.538 [2024-05-16 20:04:06.639142] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1654807 has claimed it. 00:05:19.538 [2024-05-16 20:04:06.639179] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:20.107 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1654819) - No such process 00:05:20.107 ERROR: process (pid: 1654819) is no longer running 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1654807 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1654807 00:05:20.107 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.676 lslocks: write error 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1654807 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1654807 ']' 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1654807 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1654807 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1654807' 00:05:20.676 killing process with pid 1654807 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1654807 00:05:20.676 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1654807 00:05:20.935 00:05:20.935 real 0m2.256s 00:05:20.935 user 0m2.464s 00:05:20.935 sys 0m0.604s 00:05:20.935 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.935 20:04:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.935 ************************************ 00:05:20.935 END TEST locking_app_on_locked_coremask 00:05:20.935 ************************************ 00:05:20.935 20:04:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:20.935 20:04:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.935 20:04:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.935 20:04:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.935 ************************************ 00:05:20.935 START TEST locking_overlapped_coremask 00:05:20.935 ************************************ 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1655105 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1655105 /var/tmp/spdk.sock 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1655105 ']' 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:20.935 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.935 [2024-05-16 20:04:08.066205] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:20.935 [2024-05-16 20:04:08.066273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655105 ] 00:05:21.195 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.195 [2024-05-16 20:04:08.122212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.195 [2024-05-16 20:04:08.206910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.195 [2024-05-16 20:04:08.207011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.195 [2024-05-16 20:04:08.207013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1655295 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1655295 /var/tmp/spdk2.sock 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1655295 /var/tmp/spdk2.sock 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1655295 /var/tmp/spdk2.sock 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1655295 ']' 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.762 20:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.021 [2024-05-16 20:04:08.923783] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:22.021 [2024-05-16 20:04:08.923858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655295 ] 00:05:22.021 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.021 [2024-05-16 20:04:08.994616] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1655105 has claimed it. 00:05:22.021 [2024-05-16 20:04:08.994647] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.589 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1655295) - No such process 00:05:22.589 ERROR: process (pid: 1655295) is no longer running 00:05:22.589 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.589 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:22.589 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:22.589 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1655105 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 1655105 ']' 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 1655105 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1655105 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1655105' 00:05:22.590 killing process with pid 1655105 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
1655105 00:05:22.590 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 1655105 00:05:22.849 00:05:22.849 real 0m1.900s 00:05:22.849 user 0m5.390s 00:05:22.849 sys 0m0.402s 00:05:22.849 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.849 20:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.849 ************************************ 00:05:22.849 END TEST locking_overlapped_coremask 00:05:22.849 ************************************ 00:05:22.849 20:04:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:22.849 20:04:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.849 20:04:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.849 20:04:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.109 ************************************ 00:05:23.109 START TEST locking_overlapped_coremask_via_rpc 00:05:23.109 ************************************ 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1655541 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1655541 /var/tmp/spdk.sock 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1655541 ']' 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.109 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.109 [2024-05-16 20:04:10.017279] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:23.109 [2024-05-16 20:04:10.017330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655541 ] 00:05:23.109 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.109 [2024-05-16 20:04:10.067344] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.109 [2024-05-16 20:04:10.067378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.109 [2024-05-16 20:04:10.153476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.109 [2024-05-16 20:04:10.153495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.109 [2024-05-16 20:04:10.153497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1655547 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1655547 /var/tmp/spdk2.sock 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1655547 ']' 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.369 20:04:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.369 [2024-05-16 20:04:10.371123] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:23.369 [2024-05-16 20:04:10.371200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655547 ] 00:05:23.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.369 [2024-05-16 20:04:10.441828] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
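Both overlapped-coremask tests here rest on the same mask arithmetic and on whether per-core lock files are taken at startup; a minimal sketch of the collision (binary expansions added for clarity, binary and socket paths as used by this test tree):

  # -m 0x7  -> 0b00111 -> cores 0,1,2
  # -m 0x1c -> 0b11100 -> cores 2,3,4  (overlap: core 2)
  build/bin/spdk_tgt -m 0x7 &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
  # second instance exits: "Cannot create lock on core 2, probably process <pid> has claimed it."
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks  # skips the lock files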
00:05:23.369 [2024-05-16 20:04:10.441857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.628 [2024-05-16 20:04:10.599136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.628 [2024-05-16 20:04:10.599249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.628 [2024-05-16 20:04:10.599250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.197 [2024-05-16 20:04:11.227517] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1655541 has claimed it. 
00:05:24.197 request: 00:05:24.197 { 00:05:24.197 "method": "framework_enable_cpumask_locks", 00:05:24.197 "req_id": 1 00:05:24.197 } 00:05:24.197 Got JSON-RPC error response 00:05:24.197 response: 00:05:24.197 { 00:05:24.197 "code": -32603, 00:05:24.197 "message": "Failed to claim CPU core: 2" 00:05:24.197 } 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1655541 /var/tmp/spdk.sock 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1655541 ']' 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.197 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1655547 /var/tmp/spdk2.sock 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1655547 ']' 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
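The rpc_cmd exchange above is ordinary JSON-RPC; issued by hand it would look roughly like this (scripts/rpc.py from this tree, socket path as used in the run):

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # against an instance whose cores are already claimed this returns the
  # -32603 "Failed to claim CPU core: 2" error shown above; on success it
  # takes /var/tmp/spdk_cpu_lock_* for each core in the instance's mask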
00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.456 00:05:24.456 real 0m1.595s 00:05:24.456 user 0m0.743s 00:05:24.456 sys 0m0.135s 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.456 20:04:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.456 ************************************ 00:05:24.456 END TEST locking_overlapped_coremask_via_rpc 00:05:24.456 ************************************ 00:05:24.715 20:04:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:24.715 20:04:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1655541 ]] 00:05:24.715 20:04:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1655541 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1655541 ']' 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1655541 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1655541 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1655541' 00:05:24.715 killing process with pid 1655541 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1655541 00:05:24.715 20:04:11 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1655541 00:05:24.974 20:04:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1655547 ]] 00:05:24.974 20:04:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1655547 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1655547 ']' 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1655547 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1655547 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1655547' 00:05:24.974 killing process with pid 1655547 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1655547 00:05:24.974 20:04:12 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1655547 00:05:25.544 20:04:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.544 20:04:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:25.544 20:04:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1655541 ]] 00:05:25.544 20:04:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1655541 00:05:25.544 20:04:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1655541 ']' 00:05:25.544 20:04:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1655541 00:05:25.544 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1655541) - No such process 00:05:25.544 20:04:12 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1655541 is not found' 00:05:25.544 Process with pid 1655541 is not found 00:05:25.544 20:04:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1655547 ]] 00:05:25.544 20:04:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1655547 00:05:25.545 20:04:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1655547 ']' 00:05:25.545 20:04:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1655547 00:05:25.545 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1655547) - No such process 00:05:25.545 20:04:12 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1655547 is not found' 00:05:25.545 Process with pid 1655547 is not found 00:05:25.545 20:04:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.545 00:05:25.545 real 0m14.858s 00:05:25.545 user 0m25.756s 00:05:25.545 sys 0m4.650s 00:05:25.545 20:04:12 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.545 20:04:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.545 ************************************ 00:05:25.545 END TEST cpu_locks 00:05:25.545 ************************************ 00:05:25.545 00:05:25.545 real 0m39.658s 00:05:25.545 user 1m15.386s 00:05:25.545 sys 0m8.204s 00:05:25.545 20:04:12 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.545 20:04:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.545 ************************************ 00:05:25.545 END TEST event 00:05:25.545 ************************************ 00:05:25.545 20:04:12 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:25.545 20:04:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.545 20:04:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.545 20:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:25.545 ************************************ 00:05:25.545 START TEST thread 00:05:25.545 ************************************ 00:05:25.545 20:04:12 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:25.545 * Looking for test storage... 00:05:25.545 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:25.545 20:04:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.545 20:04:12 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:25.545 20:04:12 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.545 20:04:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.545 ************************************ 00:05:25.545 START TEST thread_poller_perf 00:05:25.545 ************************************ 00:05:25.545 20:04:12 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.545 [2024-05-16 20:04:12.659913] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:25.545 [2024-05-16 20:04:12.659980] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656078 ] 00:05:25.545 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.805 [2024-05-16 20:04:12.714031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.805 [2024-05-16 20:04:12.787411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.805 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:26.743 ====================================== 00:05:26.743 busy:2105395292 (cyc) 00:05:26.743 total_run_count: 837000 00:05:26.743 tsc_hz: 2100000000 (cyc) 00:05:26.743 ====================================== 00:05:26.743 poller_cost: 2515 (cyc), 1197 (nsec) 00:05:26.743 00:05:26.743 real 0m1.215s 00:05:26.743 user 0m1.136s 00:05:26.743 sys 0m0.073s 00:05:26.744 20:04:13 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.744 20:04:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.744 ************************************ 00:05:26.744 END TEST thread_poller_perf 00:05:26.744 ************************************ 00:05:27.003 20:04:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.003 20:04:13 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:27.003 20:04:13 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.003 20:04:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.003 ************************************ 00:05:27.003 START TEST thread_poller_perf 00:05:27.003 ************************************ 00:05:27.003 20:04:13 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.003 [2024-05-16 20:04:13.949265] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:27.003 [2024-05-16 20:04:13.949364] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656324 ] 00:05:27.003 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.003 [2024-05-16 20:04:14.010724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.003 [2024-05-16 20:04:14.091895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.003 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:28.381 ====================================== 00:05:28.381 busy:2101010168 (cyc) 00:05:28.381 total_run_count: 13078000 00:05:28.381 tsc_hz: 2100000000 (cyc) 00:05:28.381 ====================================== 00:05:28.381 poller_cost: 160 (cyc), 76 (nsec) 00:05:28.381 00:05:28.381 real 0m1.229s 00:05:28.381 user 0m1.148s 00:05:28.381 sys 0m0.077s 00:05:28.381 20:04:15 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.381 20:04:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.381 ************************************ 00:05:28.381 END TEST thread_poller_perf 00:05:28.381 ************************************ 00:05:28.381 20:04:15 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:28.381 20:04:15 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:28.381 20:04:15 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.381 20:04:15 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.381 20:04:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.381 ************************************ 00:05:28.381 START TEST thread_spdk_lock 00:05:28.381 ************************************ 00:05:28.381 20:04:15 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:28.381 [2024-05-16 20:04:15.254882] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
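For reference, the poller_perf summaries above derive poller_cost as busy cycles divided by run count, converted to time via tsc_hz (2.1 GHz here):

  2105395292 cyc / 837000 runs   ~ 2515 cyc ; 2515 / 2.1 ~ 1197 ns  (1 us period)
  2101010168 cyc / 13078000 runs ~  160 cyc ;  160 / 2.1 ~   76 ns  (0 us period)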
00:05:28.381 [2024-05-16 20:04:15.254993] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656562 ] 00:05:28.381 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.381 [2024-05-16 20:04:15.314713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.381 [2024-05-16 20:04:15.393415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.381 [2024-05-16 20:04:15.393417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.949 [2024-05-16 20:04:15.890043] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:28.949 [2024-05-16 20:04:15.890081] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:28.949 [2024-05-16 20:04:15.890090] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x14cb200 00:05:28.949 [2024-05-16 20:04:15.890967] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:28.949 [2024-05-16 20:04:15.891073] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:28.949 [2024-05-16 20:04:15.891090] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:28.949 Starting test contend 00:05:28.949 Worker Delay Wait us Hold us Total us 00:05:28.949 0 3 168460 190332 358793 00:05:28.949 1 5 84117 289807 373924 00:05:28.949 PASS test contend 00:05:28.949 Starting test hold_by_poller 00:05:28.949 PASS test hold_by_poller 00:05:28.949 Starting test hold_by_message 00:05:28.949 PASS test hold_by_message 00:05:28.949 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:28.949 100014 assertions passed 00:05:28.949 0 assertions failed 00:05:28.949 00:05:28.949 real 0m0.720s 00:05:28.949 user 0m1.129s 00:05:28.949 sys 0m0.085s 00:05:28.949 20:04:15 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.949 20:04:15 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:28.949 ************************************ 00:05:28.949 END TEST thread_spdk_lock 00:05:28.949 ************************************ 00:05:28.949 00:05:28.949 real 0m3.468s 00:05:28.949 user 0m3.516s 00:05:28.949 sys 0m0.448s 00:05:28.949 20:04:15 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.949 20:04:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.949 ************************************ 00:05:28.949 END TEST thread 00:05:28.949 ************************************ 00:05:28.949 20:04:16 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:28.949 20:04:16 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.949 20:04:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.949 20:04:16 -- common/autotest_common.sh@10 -- # set +x 00:05:28.949 ************************************ 00:05:28.949 START TEST accel 00:05:28.949 ************************************ 00:05:28.949 20:04:16 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:29.209 * Looking for test storage... 00:05:29.209 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:29.209 20:04:16 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:29.209 20:04:16 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:29.209 20:04:16 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.209 20:04:16 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1656636 00:05:29.209 20:04:16 accel -- accel/accel.sh@63 -- # waitforlisten 1656636 00:05:29.209 20:04:16 accel -- common/autotest_common.sh@827 -- # '[' -z 1656636 ']' 00:05:29.209 20:04:16 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.209 20:04:16 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:29.209 20:04:16 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.209 20:04:16 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:29.209 20:04:16 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.209 20:04:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.209 20:04:16 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.209 20:04:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.209 20:04:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.209 20:04:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.209 20:04:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.209 20:04:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.209 20:04:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:29.209 20:04:16 accel -- accel/accel.sh@41 -- # jq -r . 00:05:29.209 [2024-05-16 20:04:16.171544] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:29.209 [2024-05-16 20:04:16.171619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656636 ] 00:05:29.209 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.209 [2024-05-16 20:04:16.226723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.209 [2024-05-16 20:04:16.308493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.146 20:04:16 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.146 20:04:16 accel -- common/autotest_common.sh@860 -- # return 0 00:05:30.146 20:04:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:30.146 20:04:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:30.146 20:04:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:30.146 20:04:16 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:30.146 20:04:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:30.146 20:04:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:30.146 20:04:16 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:30.146 20:04:16 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.146 20:04:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.146 20:04:16 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 
20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.146 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.146 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.146 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.147 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.147 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.147 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.147 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.147 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.147 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.147 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.147 20:04:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.147 20:04:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.147 20:04:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.147 20:04:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.147 20:04:17 accel -- accel/accel.sh@75 -- # killprocess 1656636 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@946 -- # '[' -z 1656636 ']' 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@950 -- # kill -0 1656636 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@951 -- # uname 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1656636 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1656636' 00:05:30.147 killing process with pid 1656636 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@965 -- # kill 1656636 00:05:30.147 20:04:17 accel -- common/autotest_common.sh@970 -- # 
wait 1656636 00:05:30.405 20:04:17 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:30.405 20:04:17 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:30.405 20:04:17 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:30.405 20:04:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.405 20:04:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.405 20:04:17 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:30.405 20:04:17 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:30.405 20:04:17 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.405 20:04:17 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:30.405 20:04:17 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:30.405 20:04:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:30.405 20:04:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.405 20:04:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.405 ************************************ 00:05:30.405 START TEST accel_missing_filename 00:05:30.405 ************************************ 00:05:30.405 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:30.405 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:30.405 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:30.405 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:30.405 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.405 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:30.405 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.405 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:30.405 20:04:17 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:30.405 20:04:17 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:30.405 20:04:17 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.405 20:04:17 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.405 20:04:17 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.405 20:04:17 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.405 
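The opcode table walked through earlier comes from a single RPC; a standalone equivalent of the same parsing (identical jq program to the one traced in the loop, default /var/tmp/spdk.sock socket assumed):

  ./scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # prints one <opcode>=<module> line per entry; every entry resolved
  # to "software" in this run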
20:04:17 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.405 20:04:17 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:30.405 20:04:17 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:30.664 [2024-05-16 20:04:17.557922] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:30.664 [2024-05-16 20:04:17.558020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656893 ] 00:05:30.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.664 [2024-05-16 20:04:17.620203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.664 [2024-05-16 20:04:17.703519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.664 [2024-05-16 20:04:17.749409] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.923 [2024-05-16 20:04:17.819001] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:30.923 A filename is required. 00:05:30.923 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:30.923 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.923 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:30.923 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:30.923 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:30.923 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.923 00:05:30.923 real 0m0.356s 00:05:30.923 user 0m0.262s 00:05:30.923 sys 0m0.131s 00:05:30.923 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.923 20:04:17 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:30.923 ************************************ 00:05:30.923 END TEST accel_missing_filename 00:05:30.923 ************************************ 00:05:30.923 20:04:17 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:30.923 20:04:17 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:30.923 20:04:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.923 20:04:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.923 ************************************ 00:05:30.923 START TEST accel_compress_verify 00:05:30.923 ************************************ 00:05:30.923 20:04:17 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:30.923 20:04:17 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:30.923 20:04:17 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:30.923 20:04:17 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:30.923 20:04:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.923 20:04:17 accel.accel_compress_verify -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:05:30.923 20:04:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.923 20:04:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:30.923 20:04:17 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:30.923 [2024-05-16 20:04:17.988860] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:30.923 [2024-05-16 20:04:17.988933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657121 ] 00:05:30.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.923 [2024-05-16 20:04:18.045918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.182 [2024-05-16 20:04:18.123958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.182 [2024-05-16 20:04:18.169105] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.182 [2024-05-16 20:04:18.238088] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:31.182 00:05:31.182 Compression does not support the verify option, aborting. 
00:05:31.182 20:04:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:31.182 20:04:18 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.182 20:04:18 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:31.182 20:04:18 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.182 20:04:18 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:31.182 20:04:18 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.182 00:05:31.182 real 0m0.342s 00:05:31.182 user 0m0.257s 00:05:31.182 sys 0m0.123s 00:05:31.182 20:04:18 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.182 20:04:18 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:31.182 ************************************ 00:05:31.182 END TEST accel_compress_verify 00:05:31.182 ************************************ 00:05:31.442 20:04:18 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.442 ************************************ 00:05:31.442 START TEST accel_wrong_workload 00:05:31.442 ************************************ 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:31.442 20:04:18 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:31.442 Unsupported workload type: foobar 00:05:31.442 [2024-05-16 20:04:18.401737] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:31.442 accel_perf options: 00:05:31.442 [-h help message] 00:05:31.442 [-q queue depth per core] 00:05:31.442 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:31.442 [-T number of threads per core 00:05:31.442 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:31.442 [-t time in seconds] 00:05:31.442 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:31.442 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:31.442 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:31.442 [-l for compress/decompress workloads, name of uncompressed input file 00:05:31.442 [-S for crc32c workload, use this seed value (default 0) 00:05:31.442 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:31.442 [-f for fill workload, use this BYTE value (default 255) 00:05:31.442 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:31.442 [-y verify result if this switch is on] 00:05:31.442 [-a tasks to allocate per core (default: same value as -q)] 00:05:31.442 Can be used to spread operations across a wider range of memory. 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.442 00:05:31.442 real 0m0.024s 00:05:31.442 user 0m0.012s 00:05:31.442 sys 0m0.012s 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.442 20:04:18 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:31.442 ************************************ 00:05:31.442 END TEST accel_wrong_workload 00:05:31.442 ************************************ 00:05:31.442 Error: writing output failed: Broken pipe 00:05:31.442 20:04:18 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.442 ************************************ 00:05:31.442 START TEST accel_negative_buffers 00:05:31.442 ************************************ 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.442 20:04:18 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:31.442 20:04:18 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:31.442 -x option must be non-negative. 00:05:31.442 [2024-05-16 20:04:18.501812] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:31.442 accel_perf options: 00:05:31.442 [-h help message] 00:05:31.442 [-q queue depth per core] 00:05:31.442 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:31.442 [-T number of threads per core 00:05:31.442 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:31.442 [-t time in seconds] 00:05:31.442 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:31.442 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:31.442 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:31.442 [-l for compress/decompress workloads, name of uncompressed input file 00:05:31.442 [-S for crc32c workload, use this seed value (default 0) 00:05:31.442 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:31.442 [-f for fill workload, use this BYTE value (default 255) 00:05:31.442 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:31.442 [-y verify result if this switch is on] 00:05:31.442 [-a tasks to allocate per core (default: same value as -q)] 00:05:31.442 Can be used to spread operations across a wider range of memory. 
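Both negative cases above (the unsupported 'foobar' workload and the negative '-x' buffer count) reduce to the same pattern: invoke accel_perf with a bad flag and require a non-zero exit status. A minimal stand-alone sketch of that check, using the binary path and flags traced in this log but none of the run_test/NOT plumbing from autotest_common.sh:

    #!/usr/bin/env bash
    # Sketch only: path and flags come from the trace above; the real harness
    # routes these through run_test/NOT in autotest_common.sh.
    ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf

    # An unsupported workload type must make accel_perf exit non-zero.
    if "$ACCEL_PERF" -t 1 -w foobar; then
        echo "FAIL: unsupported workload was accepted" >&2
        exit 1
    fi

    # A negative xor source-buffer count must be rejected as well.
    if "$ACCEL_PERF" -t 1 -w xor -y -x -1; then
        echo "FAIL: negative -x value was accepted" >&2
        exit 1
    fi
    echo "negative-option checks passed"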
00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.442 00:05:31.442 real 0m0.025s 00:05:31.442 user 0m0.011s 00:05:31.442 sys 0m0.014s 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.442 20:04:18 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:31.442 ************************************ 00:05:31.442 END TEST accel_negative_buffers 00:05:31.442 ************************************ 00:05:31.442 Error: writing output failed: Broken pipe 00:05:31.442 20:04:18 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.442 20:04:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.442 ************************************ 00:05:31.442 START TEST accel_crc32c 00:05:31.442 ************************************ 00:05:31.442 20:04:18 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:31.442 20:04:18 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:31.443 20:04:18 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:31.702 [2024-05-16 20:04:18.591807] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:31.702 [2024-05-16 20:04:18.591864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657188 ] 00:05:31.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.702 [2024-05-16 20:04:18.645650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.702 [2024-05-16 20:04:18.722928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:31.702 20:04:18 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.702 20:04:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.079 20:04:19 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:33.079 20:04:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.079 00:05:33.079 real 0m1.322s 00:05:33.079 user 0m1.212s 00:05:33.079 sys 0m0.123s 00:05:33.080 20:04:19 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.080 20:04:19 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:33.080 ************************************ 00:05:33.080 END TEST accel_crc32c 00:05:33.080 ************************************ 00:05:33.080 20:04:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:33.080 20:04:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:33.080 20:04:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.080 20:04:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.080 ************************************ 00:05:33.080 START TEST accel_crc32c_C2 00:05:33.080 ************************************ 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.080 20:04:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:33.080 [2024-05-16 20:04:19.995591] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
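Stripped of the xtrace noise, the two crc32c cases come down to two accel_perf invocations. The flags below are the ones from the run_test lines above, minus the -c /dev/fd/62 JSON-config plumbing that accel.sh adds:

    # Sketch: direct invocations matching the traced tests (config plumbing omitted).
    ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf

    # accel_crc32c: 1-second software crc32c run, seed 32 (-S), verifying results (-y).
    "$ACCEL_PERF" -t 1 -w crc32c -S 32 -y

    # accel_crc32c_C2: same workload with a 2-element io vector per operation (-C 2).
    "$ACCEL_PERF" -t 1 -w crc32c -y -C 2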
00:05:33.080 [2024-05-16 20:04:19.995658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657430 ] 00:05:33.080 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.080 [2024-05-16 20:04:20.054910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.080 [2024-05-16 20:04:20.138735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.080 20:04:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.458 00:05:34.458 real 0m1.357s 00:05:34.458 user 0m1.244s 00:05:34.458 sys 0m0.124s 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.458 20:04:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:34.458 ************************************ 00:05:34.458 END TEST accel_crc32c_C2 00:05:34.458 ************************************ 00:05:34.458 20:04:21 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:34.458 20:04:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:34.458 20:04:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.458 20:04:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.458 ************************************ 00:05:34.458 START TEST accel_copy 00:05:34.458 ************************************ 00:05:34.458 20:04:21 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:34.458 20:04:21 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:05:34.458 [2024-05-16 20:04:21.411302] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:34.458 [2024-05-16 20:04:21.411336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657670 ] 00:05:34.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.458 [2024-05-16 20:04:21.461819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.458 [2024-05-16 20:04:21.537258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.458 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.459 20:04:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:35.838 20:04:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.838 00:05:35.838 real 0m1.327s 00:05:35.838 user 0m1.228s 00:05:35.838 sys 0m0.111s 00:05:35.838 20:04:22 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.838 20:04:22 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:35.838 ************************************ 00:05:35.838 END TEST accel_copy 00:05:35.838 ************************************ 00:05:35.838 20:04:22 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:35.838 20:04:22 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:35.838 20:04:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.838 20:04:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.838 ************************************ 00:05:35.838 START TEST accel_fill 00:05:35.838 ************************************ 00:05:35.838 20:04:22 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:35.838 [2024-05-16 20:04:22.809440] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
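The long val= runs in each of these blocks are accel.sh's config loop tracing itself: key:value pairs are split on ':' via IFS and dispatched through a case statement. A reduced sketch of that pattern follows; the keys, variable names, and input stream are illustrative, not SPDK's exact code:

    # Reduced sketch of the "local IFS=: / read -r var val / case" dispatch traced above.
    # The printf stream stands in for the real config source; keys are hypothetical.
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # workload name, e.g. copy or crc32c
            module) accel_module=$val ;;  # executing module, e.g. software
        esac
    done < <(printf '%s\n' 'opc:copy' 'module:software')
    echo "opc=$accel_opc module=$accel_module"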
00:05:35.838 [2024-05-16 20:04:22.809499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657905 ] 00:05:35.838 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.838 [2024-05-16 20:04:22.861885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.838 [2024-05-16 20:04:22.937254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.838 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.097 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.098 20:04:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:37.039 20:04:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.039 00:05:37.039 real 0m1.333s 00:05:37.039 user 0m1.227s 00:05:37.039 sys 0m0.120s 00:05:37.039 20:04:24 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.039 20:04:24 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:37.039 ************************************ 00:05:37.039 END TEST accel_fill 00:05:37.039 ************************************ 00:05:37.039 20:04:24 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:37.039 20:04:24 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:37.039 20:04:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.039 20:04:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.298 ************************************ 00:05:37.298 START TEST accel_copy_crc32c 00:05:37.298 ************************************ 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:37.298 [2024-05-16 20:04:24.207192] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
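As with crc32c, the copy and fill cases are plain accel_perf runs once the harness tracing is peeled away; fill additionally pins the fill byte, queue depth, and per-core task count (flags copied from the run_test lines above, config plumbing again omitted):

    # Sketch: the traced copy and fill invocations.
    ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf

    # accel_copy: 1-second software copy run with verification.
    "$ACCEL_PERF" -t 1 -w copy -y

    # accel_fill: fill byte 128 (-f), queue depth 64 (-q), 64 tasks per core (-a).
    "$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y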
00:05:37.298 [2024-05-16 20:04:24.207243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658148 ] 00:05:37.298 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.298 [2024-05-16 20:04:24.259459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.298 [2024-05-16 20:04:24.334508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.298 20:04:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.676 00:05:38.676 real 0m1.332s 00:05:38.676 user 0m1.224s 00:05:38.676 sys 0m0.122s 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.676 20:04:25 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:38.676 ************************************ 00:05:38.676 END TEST accel_copy_crc32c 00:05:38.676 ************************************ 00:05:38.676 20:04:25 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:38.676 20:04:25 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:38.676 20:04:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.676 20:04:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.676 ************************************ 00:05:38.676 START TEST accel_copy_crc32c_C2 00:05:38.676 ************************************ 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.676 
20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:38.676 [2024-05-16 20:04:25.609369] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:38.676 [2024-05-16 20:04:25.609434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658385 ] 00:05:38.676 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.676 [2024-05-16 20:04:25.664630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.676 [2024-05-16 20:04:25.739973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 
20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.676 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.677 20:04:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.056 00:05:40.056 real 0m1.339s 00:05:40.056 user 0m1.238s 00:05:40.056 sys 0m0.116s 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.056 20:04:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:40.056 ************************************ 00:05:40.056 END TEST accel_copy_crc32c_C2 00:05:40.056 
************************************ 00:05:40.056 20:04:26 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:40.056 20:04:26 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:40.056 20:04:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.056 20:04:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.056 ************************************ 00:05:40.056 START TEST accel_dualcast 00:05:40.056 ************************************ 00:05:40.056 20:04:27 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:40.056 [2024-05-16 20:04:27.021691] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
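[Note] The _C2 case that just closed is the same workload with -C 2 appended (run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2). The only visible difference in its traced settings is a second buffer-size record, val='8192 bytes' alongside val='4096 bytes', where the plain run logged '4096 bytes' twice. A hedged standalone invocation, under the same assumptions as the earlier sketch:

    # hedged sketch: the chained variant; -C 2 presumably accounts for the
    # 8192-byte value in the trace (2 x 4096)
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2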
00:05:40.056 [2024-05-16 20:04:27.021757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658629 ] 00:05:40.056 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.056 [2024-05-16 20:04:27.078632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.056 [2024-05-16 20:04:27.150643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.056 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.057 
20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.057 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.316 20:04:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.254 20:04:28 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:41.254 20:04:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.254 00:05:41.254 real 0m1.324s 00:05:41.254 user 0m1.222s 00:05:41.254 sys 0m0.114s 00:05:41.254 20:04:28 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.254 20:04:28 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:41.254 ************************************ 00:05:41.254 END TEST accel_dualcast 00:05:41.254 ************************************ 00:05:41.254 20:04:28 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:41.254 20:04:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:41.254 20:04:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.254 20:04:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.254 ************************************ 00:05:41.254 START TEST accel_compare 00:05:41.254 ************************************ 00:05:41.254 20:04:28 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:41.254 20:04:28 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:41.254 20:04:28 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:41.254 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.254 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.254 20:04:28 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:41.255 20:04:28 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:41.255 [2024-05-16 20:04:28.400706] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
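[Note] The long runs of '# case "$var" in' / '# IFS=:' / '# read -r var val' records above and below are bash xtrace output from a key:value parsing loop in accel/accel.sh, which consumes what accel_perf prints and latches each setting seen as a val=... record (val=software, val=32, val='1 seconds', val=Yes, ...). A minimal reconstruction of the loop's shape; only IFS=:, read -r var val, case "$var", accel_opc= and accel_module= are visible in the trace, so the case patterns and the input plumbing are assumptions:

    # hedged reconstruction of the parsing loop implied by the xtrace records;
    # the case patterns and the input file below are assumptions
    while IFS=: read -r var val; do
      case "$var" in
        *[Mm]odule*) accel_module=${val# } ;;  # latches "software" in this log
        *[Oo]pcode*) accel_opc=${val# } ;;     # latches fill, compare, xor, ...
      esac
    done < perf_output.txt  # hypothetical capture of accel_perf's output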
00:05:41.255 [2024-05-16 20:04:28.400799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658871 ] 00:05:41.575 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.575 [2024-05-16 20:04:28.456788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.575 [2024-05-16 20:04:28.532203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.575 20:04:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:42.955 20:04:29 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.955 00:05:42.955 real 0m1.340s 00:05:42.955 user 0m1.228s 00:05:42.955 sys 0m0.123s 00:05:42.955 20:04:29 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.955 20:04:29 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:42.955 ************************************ 00:05:42.955 END TEST accel_compare 00:05:42.955 ************************************ 00:05:42.955 20:04:29 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:42.955 20:04:29 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:42.955 20:04:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.955 20:04:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.955 ************************************ 00:05:42.955 START TEST accel_xor 00:05:42.955 ************************************ 00:05:42.955 20:04:29 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:42.955 [2024-05-16 20:04:29.802418] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
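[Note] The START/END banners and the real/user/sys summaries (real 0m1.340s for the compare case that just closed) come from the run_test wrapper named in each '# run_test ...' record; the common/autotest_common.sh frames in the trace place it there, but its internals are not shown, so this is only a sketch of the observable behavior:

    # hedged sketch of a run_test-style wrapper; the banner text matches the
    # log, everything else is an assumption
    run_test() {
      local name=$1; shift
      printf '************************************\n'
      printf 'START TEST %s\n' "$name"
      printf '************************************\n'
      time "$@"                 # emits the real/user/sys summary seen above
      local rc=$?
      printf '************************************\n'
      printf 'END TEST %s\n' "$name"
      printf '************************************\n'
      return $rc
    }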
00:05:42.955 [2024-05-16 20:04:29.802490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659111 ] 00:05:42.955 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.955 [2024-05-16 20:04:29.859933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.955 [2024-05-16 20:04:29.935249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.955 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.956 20:04:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 
20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.334 00:05:44.334 real 0m1.346s 00:05:44.334 user 0m1.229s 00:05:44.334 sys 0m0.128s 00:05:44.334 20:04:31 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.334 20:04:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:44.334 ************************************ 00:05:44.334 END TEST accel_xor 00:05:44.334 ************************************ 00:05:44.334 20:04:31 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:44.334 20:04:31 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:44.334 20:04:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.334 20:04:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.334 ************************************ 00:05:44.334 START TEST accel_xor 00:05:44.334 ************************************ 00:05:44.334 20:04:31 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:44.334 [2024-05-16 20:04:31.206282] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
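[Note] This second accel_xor block repeats the workload with three source buffers: the run_test line adds -x 3, the full command becomes accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3, and the traced source count changes from val=2 in the previous block to val=3 here. Standalone, under the same assumptions as the first sketch:

    # hedged sketch: same xor workload, three sources instead of the default
    # two (-x 3 taken verbatim from the command captured in the trace above)
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3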
00:05:44.334 [2024-05-16 20:04:31.206346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659355 ] 00:05:44.334 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.334 [2024-05-16 20:04:31.262805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.334 [2024-05-16 20:04:31.338586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.334 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.335 20:04:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.711 
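The wall of 'case "$var" in' / 'IFS=:' / 'read -r var val' entries filling this section is bash xtrace output from a key:value parsing loop in accel.sh (the @19/@21/@22/@23 markers are its line numbers): it reads the expected configuration one field at a time and records the opcode and module that the test later asserts on. Roughly, assuming a colon-separated stream (the real loop differs in detail):

# Simplified sketch of the traced loop; expected_config_stream is a hypothetical
# stand-in for wherever accel.sh actually gets its key:value stream.
expected_config_stream() {
  printf '%s\n' "opc:xor" "module:software"
}
while IFS=: read -r var val; do   # source of the 'IFS=:' and 'read -r var val' traces
  case "$var" in                  # source of the 'case "$var" in' traces
    opc) accel_opc=$val ;;        # later checked by the [[ -n xor ]]-style asserts
    module) accel_module=$val ;;  # later checked by [[ software == software ]]
  esac
done < <(expected_config_stream)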
20:04:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:45.711 20:04:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.711 00:05:45.711 real 0m1.345s 00:05:45.711 user 0m1.235s 00:05:45.711 sys 0m0.122s 00:05:45.711 20:04:32 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.711 20:04:32 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:45.711 ************************************ 00:05:45.711 END TEST accel_xor 00:05:45.711 ************************************ 00:05:45.711 20:04:32 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:45.711 20:04:32 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:45.711 20:04:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.711 20:04:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.711 ************************************ 00:05:45.711 START TEST accel_dif_verify 00:05:45.711 ************************************ 00:05:45.711 20:04:32 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:45.711 [2024-05-16 20:04:32.613726] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
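Every test in this stream is wrapped the same way: the START/END banners and the real/user/sys triple between them come from the run_test helper in autotest_common.sh, which times the wrapped command and propagates its exit status. A simplified sketch of that wrapper (not the exact implementation):

# run_test, approximately: banner, time the command, banner. The real helper in
# autotest_common.sh also manages xtrace state around the wrapped command.
run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"   # source of the 'real 0m1.3xxs' lines throughout this section
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}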
00:05:45.711 [2024-05-16 20:04:32.613779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659592 ] 00:05:45.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.711 [2024-05-16 20:04:32.659633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.711 [2024-05-16 20:04:32.734855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.711 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 
20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.712 20:04:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.087 
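The buffer geometry configured above matches the standard T10 protection-information layout: 4096-byte source and destination buffers carved into 512-byte blocks, each protected by an 8-byte DIF (a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag). dif_verify asks the accel framework to check those fields rather than compute them. Re-running just this case by hand, under the same $SPDK_DIR assumption as the xor sketch above:

# Sketch: standalone dif_verify run with the defaults recorded in this log
# (4096-byte buffers, 512-byte blocks, 8-byte DIF per block).
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify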
20:04:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:47.087 20:04:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.087 00:05:47.087 real 0m1.326s 00:05:47.087 user 0m1.221s 00:05:47.087 sys 0m0.118s 00:05:47.087 20:04:33 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.087 20:04:33 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:47.087 ************************************ 00:05:47.087 END TEST accel_dif_verify 00:05:47.087 ************************************ 00:05:47.087 20:04:33 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:47.087 20:04:33 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:47.087 20:04:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.087 20:04:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.087 ************************************ 00:05:47.087 START TEST accel_dif_generate 00:05:47.087 ************************************ 00:05:47.087 20:04:33 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:47.087 20:04:33 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:47.087 20:04:33 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:47.087 20:04:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.087 20:04:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.087 
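accel_dif_generate is the other half of the DIF pair: instead of checking existing protection information it computes and inserts the guard/app/ref tags for each block, with the same 4096-byte-buffer, 512-byte-block, 8-byte-DIF geometry that the values below configure. Standalone equivalent, same assumptions as above:

# Sketch: generate (rather than verify) per-block protection information.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate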
20:04:33 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:47.087 20:04:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:47.087 20:04:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:47.087 20:04:34 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.087 20:04:34 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.087 20:04:34 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.087 20:04:34 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.087 20:04:34 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.087 20:04:34 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:47.087 20:04:34 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:47.087 [2024-05-16 20:04:34.016940] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:47.087 [2024-05-16 20:04:34.017006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659827 ] 00:05:47.087 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.087 [2024-05-16 20:04:34.074519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.088 [2024-05-16 20:04:34.151598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.088 20:04:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:48.470 20:04:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.470 00:05:48.470 real 0m1.346s 00:05:48.470 user 0m1.237s 00:05:48.470 sys 
0m0.124s 00:05:48.470 20:04:35 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.470 20:04:35 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:48.470 ************************************ 00:05:48.470 END TEST accel_dif_generate 00:05:48.470 ************************************ 00:05:48.470 20:04:35 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:48.470 20:04:35 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:48.470 20:04:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.470 20:04:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.470 ************************************ 00:05:48.470 START TEST accel_dif_generate_copy 00:05:48.470 ************************************ 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:48.470 [2024-05-16 20:04:35.418286] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
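dif_generate_copy fuses two steps into one accel operation: the data is copied from source to destination and the per-block protection information is generated on the way, which is why the run that follows configures both a 4096-byte source and a 4096-byte destination buffer. Standalone equivalent, same assumptions as above:

# Sketch: generate DIF while copying, as a single offloadable operation.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy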
00:05:48.470 [2024-05-16 20:04:35.418337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660077 ] 00:05:48.470 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.470 [2024-05-16 20:04:35.470152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.470 [2024-05-16 20:04:35.547584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.470 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.471 20:04:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.850 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.851 00:05:49.851 real 0m1.323s 00:05:49.851 user 0m1.231s 00:05:49.851 sys 0m0.106s 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.851 20:04:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:49.851 ************************************ 00:05:49.851 END TEST accel_dif_generate_copy 00:05:49.851 ************************************ 00:05:49.851 20:04:36 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:49.851 20:04:36 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:49.851 20:04:36 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:49.851 20:04:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.851 20:04:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.851 ************************************ 00:05:49.851 START TEST accel_comp 00:05:49.851 ************************************ 00:05:49.851 20:04:36 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:49.851 20:04:36 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:49.851 [2024-05-16 20:04:36.819608] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:49.851 [2024-05-16 20:04:36.819678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660313 ] 00:05:49.851 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.851 [2024-05-16 20:04:36.875304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.851 [2024-05-16 20:04:36.950211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:50.110 20:04:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 
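The compress workload is the first case in this stream that needs real input data: per the command line recorded above, -l points accel_perf at test/accel/bib, a text corpus file shipped in the SPDK tree, instead of synthetic buffers. Standalone equivalent, same $SPDK_DIR assumption:

# Sketch: compress a real corpus file for one second through the accel framework.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib"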
20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.110 20:04:37 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.110 20:04:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:51.049 20:04:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.049 00:05:51.049 real 0m1.331s 00:05:51.049 user 0m1.224s 00:05:51.049 sys 0m0.122s 00:05:51.049 20:04:38 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.049 20:04:38 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:51.049 ************************************ 00:05:51.049 END TEST accel_comp 00:05:51.049 ************************************ 00:05:51.049 20:04:38 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.049 20:04:38 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:51.049 20:04:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.049 20:04:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.049 ************************************ 00:05:51.049 START TEST accel_decomp 00:05:51.049 ************************************ 00:05:51.309 20:04:38 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:51.309 [2024-05-16 20:04:38.203450] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:51.309 [2024-05-16 20:04:38.203492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660550 ] 00:05:51.309 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.309 [2024-05-16 20:04:38.252354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.309 [2024-05-16 20:04:38.327864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.309 
20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.309 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.310 20:04:38 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.310 20:04:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:52.689 20:04:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.689 00:05:52.689 real 0m1.327s 00:05:52.689 user 0m1.222s 00:05:52.689 sys 0m0.119s 00:05:52.689 20:04:39 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.689 20:04:39 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:52.689 ************************************ 00:05:52.689 END TEST accel_decomp 00:05:52.689 ************************************ 00:05:52.689 
20:04:39 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.689 20:04:39 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:52.689 20:04:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.689 20:04:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.689 ************************************ 00:05:52.689 START TEST accel_decmop_full 00:05:52.689 ************************************ 00:05:52.689 20:04:39 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:52.689 [2024-05-16 20:04:39.600285] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:52.689 [2024-05-16 20:04:39.600371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660794 ] 00:05:52.689 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.689 [2024-05-16 20:04:39.656603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.689 [2024-05-16 20:04:39.736611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:52.689 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.690 20:04:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.069 20:04:40 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.069 00:05:54.069 real 0m1.359s 00:05:54.069 user 0m1.242s 00:05:54.069 sys 0m0.128s 00:05:54.069 20:04:40 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.069 20:04:40 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:54.069 ************************************ 00:05:54.069 END TEST accel_decmop_full 00:05:54.069 ************************************ 00:05:54.069 20:04:40 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:54.069 20:04:40 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:54.069 20:04:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.069 20:04:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.069 ************************************ 00:05:54.069 START TEST accel_decomp_mcore 00:05:54.069 ************************************ 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:54.069 20:04:41 
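The accel_decmop_full case that finishes above (the transposed 'decmop' spelling comes from the harness itself, not from the log capture) differs from accel_decomp only by the extra '-o 0' flag; judging by the trace, that switches the data size from the default '4096 bytes' to the full '111250 bytes' input handled as a single buffer. A sketch of the equivalent direct invocation, under the same assumptions as before, with the -o 0 reading inferred from the trace rather than from accel_perf's documentation:

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# Same verified decompress run, but -o 0 processes the bib file as one
# full-size (111250-byte) buffer rather than 4096-byte chunks.
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0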
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:54.069 [2024-05-16 20:04:41.009026] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:54.069 [2024-05-16 20:04:41.009063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661029 ] 00:05:54.069 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.069 [2024-05-16 20:04:41.059591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.069 [2024-05-16 20:04:41.138222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.069 [2024-05-16 20:04:41.138318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.069 [2024-05-16 20:04:41.138405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.069 [2024-05-16 20:04:41.138407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:41 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.069 20:04:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.444 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.445 00:05:55.445 real 0m1.345s 00:05:55.445 user 0m4.607s 00:05:55.445 sys 0m0.127s 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.445 20:04:42 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:55.445 ************************************ 00:05:55.445 END TEST accel_decomp_mcore 00:05:55.445 ************************************ 00:05:55.445 20:04:42 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:55.445 20:04:42 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:55.445 20:04:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.445 20:04:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.445 ************************************ 00:05:55.445 START TEST accel_decomp_full_mcore 00:05:55.445 ************************************ 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.445 20:04:42 
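The accel_decomp_mcore run above adds '-m 0xf', and the trace confirms the effect: 'Total cores available: 4' and reactors started on cores 0 through 3. The timing line is consistent with four cores working in parallel (real 0m1.345s of wall time against user 0m4.607s of CPU time). A sketch of the same multicore run, with the usual caveat about the omitted /dev/fd/62 config:

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# Verified decompress across four reactors (core mask 0xf); expect roughly
# 4x CPU time per second of wall time, as in the figures above.
"$SPDK/build/examples/accel_perf" -m 0xf -t 1 -w decompress -l "$SPDK/test/accel/bib" -y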
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:55.445 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:55.445 [2024-05-16 20:04:42.422662] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:55.445 [2024-05-16 20:04:42.422700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661275 ] 00:05:55.445 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.445 [2024-05-16 20:04:42.473831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.445 [2024-05-16 20:04:42.552299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.445 [2024-05-16 20:04:42.552395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.445 [2024-05-16 20:04:42.552482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.445 [2024-05-16 20:04:42.552484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 
00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.704 20:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.640 00:05:56.640 real 0m1.354s 00:05:56.640 user 0m4.648s 00:05:56.640 sys 0m0.121s 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.640 20:04:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:56.640 ************************************ 00:05:56.640 END TEST accel_decomp_full_mcore 00:05:56.640 ************************************ 00:05:56.900 20:04:43 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:56.900 20:04:43 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:56.900 20:04:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.900 20:04:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.900 ************************************ 00:05:56.900 START TEST accel_decomp_mthread 00:05:56.900 ************************************ 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.900 20:04:43 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:56.900 20:04:43 
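accel_decomp_full_mcore, which ends above, simply combines the two previous variations: '-o 0' (full 111250-byte buffers) plus '-m 0xf' (four reactors), and its real 0m1.354s / user 0m4.648s figures line up with the plain mcore case. An equivalent sketch under the same assumptions:

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# Full-buffer decompress on four cores: the -o 0 and -m 0xf variations combined.
"$SPDK/build/examples/accel_perf" -m 0xf -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0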
accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:56.900 [2024-05-16 20:04:43.849902] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:56.900 [2024-05-16 20:04:43.849941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661519 ] 00:05:56.900 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.900 [2024-05-16 20:04:43.901092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.900 [2024-05-16 20:04:43.977093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:05:56.900 20:04:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.277 00:05:58.277 real 0m1.332s 00:05:58.277 user 0m1.233s 00:05:58.277 sys 0m0.115s 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.277 20:04:45 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:58.277 ************************************ 00:05:58.277 END TEST accel_decomp_mthread 00:05:58.277 ************************************ 00:05:58.277 20:04:45 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:58.277 20:04:45 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:58.277 20:04:45 accel -- 
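accel_decomp_mthread, completed above, stays on one core but passes '-T 2'; the trace records it as 'val=2', which reads as two concurrent worker threads driving the decompress engine, though that interpretation is inferred from the test name and trace rather than stated in the log. The accel_decomp_full_mthread case that starts next combines '-T 2' with '-o 0'. A single-core, two-thread sketch under the same assumptions as the earlier ones:

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# One core, two worker threads (-T 2), verified 4096-byte decompress.
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2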
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.277 20:04:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.277 ************************************ 00:05:58.277 START TEST accel_decomp_full_mthread 00:05:58.277 ************************************ 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:58.277 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:58.277 [2024-05-16 20:04:45.248820] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:58.277 [2024-05-16 20:04:45.248899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661767 ] 00:05:58.277 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.277 [2024-05-16 20:04:45.306924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.277 [2024-05-16 20:04:45.391494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.537 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.538 20:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.474 00:05:59.474 real 0m1.364s 00:05:59.474 user 0m1.256s 00:05:59.474 sys 0m0.120s 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.474 20:04:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:59.474 ************************************ 00:05:59.474 END TEST accel_decomp_full_mthread 00:05:59.474 
************************************ 00:05:59.734 20:04:46 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:59.734 20:04:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:59.734 20:04:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:59.734 20:04:46 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:59.734 20:04:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.734 20:04:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.734 20:04:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.734 20:04:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.734 20:04:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.734 20:04:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.734 20:04:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.734 20:04:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:59.734 20:04:46 accel -- accel/accel.sh@41 -- # jq -r . 00:05:59.734 ************************************ 00:05:59.734 START TEST accel_dif_functional_tests 00:05:59.734 ************************************ 00:05:59.734 20:04:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:59.734 [2024-05-16 20:04:46.682450] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:59.734 [2024-05-16 20:04:46.682531] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662079 ] 00:05:59.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.734 [2024-05-16 20:04:46.739654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.734 [2024-05-16 20:04:46.817507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.734 [2024-05-16 20:04:46.817604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.734 [2024-05-16 20:04:46.817606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.994 00:05:59.994 00:05:59.994 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.994 http://cunit.sourceforge.net/ 00:05:59.994 00:05:59.994 00:05:59.994 Suite: accel_dif 00:05:59.994 Test: verify: DIF generated, GUARD check ...passed 00:05:59.994 Test: verify: DIF generated, APPTAG check ...passed 00:05:59.994 Test: verify: DIF generated, REFTAG check ...passed 00:05:59.994 Test: verify: DIF not generated, GUARD check ...[2024-05-16 20:04:46.894204] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:59.994 passed 00:05:59.994 Test: verify: DIF not generated, APPTAG check ...[2024-05-16 20:04:46.894248] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:59.994 passed 00:05:59.994 Test: verify: DIF not generated, REFTAG check ...[2024-05-16 20:04:46.894269] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:59.994 passed 00:05:59.994 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:59.994 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-16 20:04:46.894313] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:59.994 passed 00:05:59.994 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:59.994 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:59.994 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:59.994 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-16 20:04:46.894422] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:59.994 passed 00:05:59.994 Test: verify copy: DIF generated, GUARD check ...passed 00:05:59.994 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:59.994 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:59.994 Test: verify copy: DIF not generated, GUARD check ...[2024-05-16 20:04:46.894531] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:59.994 passed 00:05:59.994 Test: verify copy: DIF not generated, APPTAG check ...[2024-05-16 20:04:46.894555] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:59.994 passed 00:05:59.994 Test: verify copy: DIF not generated, REFTAG check ...[2024-05-16 20:04:46.894581] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:59.994 passed 00:05:59.994 Test: generate copy: DIF generated, GUARD check ...passed 00:05:59.994 Test: generate copy: DIF generated, APPTAG check ...passed 00:05:59.994 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:59.994 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:59.994 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:59.994 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:59.994 Test: generate copy: iovecs-len validate ...[2024-05-16 20:04:46.894965] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:05:59.994 passed 00:05:59.994 Test: generate copy: buffer alignment validate ...passed 00:05:59.994 00:05:59.994 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.994 suites 1 1 n/a 0 0 00:05:59.994 tests 26 26 26 0 0 00:05:59.994 asserts 115 115 115 0 n/a 00:05:59.994 00:05:59.994 Elapsed time = 0.002 seconds 00:05:59.994 00:05:59.994 real 0m0.410s 00:05:59.994 user 0m0.657s 00:05:59.994 sys 0m0.151s 00:05:59.994 20:04:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.994 20:04:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:59.994 ************************************ 00:05:59.994 END TEST accel_dif_functional_tests 00:05:59.994 ************************************ 00:05:59.994 00:05:59.994 real 0m31.039s 00:05:59.994 user 0m34.789s 00:05:59.994 sys 0m4.346s 00:05:59.994 20:04:47 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.994 20:04:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.994 ************************************ 00:05:59.994 END TEST accel 00:05:59.994 ************************************ 00:06:00.254 20:04:47 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:00.254 20:04:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.254 20:04:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.254 20:04:47 -- common/autotest_common.sh@10 -- # set +x 00:06:00.254 ************************************ 00:06:00.254 START TEST accel_rpc 00:06:00.254 ************************************ 00:06:00.254 20:04:47 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:00.254 * Looking for test storage... 00:06:00.254 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:00.254 20:04:47 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.254 20:04:47 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1662270 00:06:00.254 20:04:47 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1662270 00:06:00.254 20:04:47 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 1662270 ']' 00:06:00.254 20:04:47 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.254 20:04:47 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:00.254 20:04:47 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.254 20:04:47 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.254 20:04:47 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.254 20:04:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.254 [2024-05-16 20:04:47.283715] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
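The accel_rpc suite starting here uses the standard target lifecycle visible in the trace: spdk_tgt is launched with --wait-for-rpc, waitforlisten blocks until pid 1662270 answers on /var/tmp/spdk.sock, and an ERR trap guarantees teardown on failure. A rough equivalent, with waitforlisten approximated by a polling loop:

  build/bin/spdk_tgt --wait-for-rpc &
  spdk_tgt_pid=$!
  trap 'kill -9 "$spdk_tgt_pid"; exit 1' ERR   # the trace uses the killprocess helper
  # waitforlisten: retry until the JSON-RPC socket accepts a method call
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done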
00:06:00.254 [2024-05-16 20:04:47.283782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662270 ] 00:06:00.254 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.254 [2024-05-16 20:04:47.336881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.513 [2024-05-16 20:04:47.419858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.513 20:04:47 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.513 20:04:47 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:00.513 20:04:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:00.513 20:04:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:00.513 20:04:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:00.513 20:04:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:00.513 20:04:47 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:00.513 20:04:47 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.513 20:04:47 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.513 20:04:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.513 ************************************ 00:06:00.513 START TEST accel_assign_opcode 00:06:00.513 ************************************ 00:06:00.513 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:00.513 20:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:00.513 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.513 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:00.513 [2024-05-16 20:04:47.488321] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:00.513 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.513 20:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:00.513 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.514 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:00.514 [2024-05-16 20:04:47.496332] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:00.514 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.514 20:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:00.514 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.514 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.773 20:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:00.773 20:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:00.773 20:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:00.773 20:04:47 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.773 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.773 software 00:06:00.773 00:06:00.773 real 0m0.239s 00:06:00.773 user 0m0.036s 00:06:00.773 sys 0m0.013s 00:06:00.773 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.773 20:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 ************************************ 00:06:00.773 END TEST accel_assign_opcode 00:06:00.773 ************************************ 00:06:00.773 20:04:47 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1662270 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 1662270 ']' 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 1662270 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1662270 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1662270' 00:06:00.773 killing process with pid 1662270 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@965 -- # kill 1662270 00:06:00.773 20:04:47 accel_rpc -- common/autotest_common.sh@970 -- # wait 1662270 00:06:01.032 00:06:01.032 real 0m0.929s 00:06:01.032 user 0m0.838s 00:06:01.032 sys 0m0.416s 00:06:01.032 20:04:48 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.032 20:04:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 ************************************ 00:06:01.032 END TEST accel_rpc 00:06:01.032 ************************************ 00:06:01.032 20:04:48 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:01.032 20:04:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.032 20:04:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.032 20:04:48 -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 ************************************ 00:06:01.032 START TEST app_cmdline 00:06:01.032 ************************************ 00:06:01.032 20:04:48 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:01.291 * Looking for test storage... 
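The accel_assign_opcode test just completed reduces to three RPCs plus a check: bind the copy opcode to a nonexistent module, rebind it to software, start the framework, and confirm which assignment survived. Condensed from the rpc_cmd calls in the trace (rpc_cmd forwards its arguments straight to rpc.py):

  scripts/rpc.py accel_assign_opc -o copy -m incorrect   # NOTICE: copy -> incorrect
  scripts/rpc.py accel_assign_opc -o copy -m software    # NOTICE: copy -> software
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software   # passes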
00:06:01.291 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:01.291 20:04:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:01.291 20:04:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1662538 00:06:01.291 20:04:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1662538 00:06:01.291 20:04:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:01.291 20:04:48 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 1662538 ']' 00:06:01.292 20:04:48 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.292 20:04:48 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.292 20:04:48 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.292 20:04:48 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.292 20:04:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.292 [2024-05-16 20:04:48.275700] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:01.292 [2024-05-16 20:04:48.275773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662538 ] 00:06:01.292 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.292 [2024-05-16 20:04:48.329408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.292 [2024-05-16 20:04:48.412782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:02.231 { 00:06:02.231 "version": "SPDK v24.09-pre git sha1 cf8ec7cfe", 00:06:02.231 "fields": { 00:06:02.231 "major": 24, 00:06:02.231 "minor": 9, 00:06:02.231 "patch": 0, 00:06:02.231 "suffix": "-pre", 00:06:02.231 "commit": "cf8ec7cfe" 00:06:02.231 } 00:06:02.231 } 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:02.231 20:04:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:02.231 20:04:49 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.491 request: 00:06:02.491 { 00:06:02.491 "method": "env_dpdk_get_mem_stats", 00:06:02.491 "req_id": 1 00:06:02.491 } 00:06:02.491 Got JSON-RPC error response 00:06:02.491 response: 00:06:02.491 { 00:06:02.491 "code": -32601, 00:06:02.491 "message": "Method not found" 00:06:02.491 } 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.491 20:04:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1662538 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 1662538 ']' 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 1662538 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1662538 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1662538' 00:06:02.491 killing process with pid 1662538 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@965 -- # kill 1662538 00:06:02.491 20:04:49 app_cmdline -- common/autotest_common.sh@970 -- # wait 1662538 00:06:02.750 00:06:02.750 real 0m1.675s 00:06:02.750 user 0m1.988s 00:06:02.750 sys 0m0.434s 00:06:02.750 20:04:49 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.750 
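That "Method not found" response is the assertion, not a failure: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside the allowlist must come back as JSON-RPC error -32601. The check, reduced to its two calls:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version        # allowed: returns the version object above
  scripts/rpc.py env_dpdk_get_mem_stats  # blocked: code -32601, "Method not found"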
20:04:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.750 ************************************ 00:06:02.750 END TEST app_cmdline 00:06:02.750 ************************************ 00:06:02.750 20:04:49 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:02.750 20:04:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.750 20:04:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.750 20:04:49 -- common/autotest_common.sh@10 -- # set +x 00:06:03.009 ************************************ 00:06:03.009 START TEST version 00:06:03.009 ************************************ 00:06:03.009 20:04:49 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:03.009 * Looking for test storage... 00:06:03.009 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:03.009 20:04:50 version -- app/version.sh@17 -- # get_header_version major 00:06:03.009 20:04:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:03.009 20:04:50 version -- app/version.sh@14 -- # cut -f2 00:06:03.009 20:04:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.009 20:04:50 version -- app/version.sh@17 -- # major=24 00:06:03.009 20:04:50 version -- app/version.sh@18 -- # get_header_version minor 00:06:03.009 20:04:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:03.009 20:04:50 version -- app/version.sh@14 -- # cut -f2 00:06:03.009 20:04:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.009 20:04:50 version -- app/version.sh@18 -- # minor=9 00:06:03.009 20:04:50 version -- app/version.sh@19 -- # get_header_version patch 00:06:03.009 20:04:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:03.009 20:04:50 version -- app/version.sh@14 -- # cut -f2 00:06:03.009 20:04:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.009 20:04:50 version -- app/version.sh@19 -- # patch=0 00:06:03.009 20:04:50 version -- app/version.sh@20 -- # get_header_version suffix 00:06:03.009 20:04:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:03.009 20:04:50 version -- app/version.sh@14 -- # cut -f2 00:06:03.010 20:04:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.010 20:04:50 version -- app/version.sh@20 -- # suffix=-pre 00:06:03.010 20:04:50 version -- app/version.sh@22 -- # version=24.9 00:06:03.010 20:04:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:03.010 20:04:50 version -- app/version.sh@28 -- # version=24.9rc0 00:06:03.010 20:04:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:03.010 20:04:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:03.010 20:04:50 version -- 
app/version.sh@30 -- # py_version=24.9rc0 00:06:03.010 20:04:50 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:03.010 00:06:03.010 real 0m0.147s 00:06:03.010 user 0m0.081s 00:06:03.010 sys 0m0.104s 00:06:03.010 20:04:50 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.010 20:04:50 version -- common/autotest_common.sh@10 -- # set +x 00:06:03.010 ************************************ 00:06:03.010 END TEST version 00:06:03.010 ************************************ 00:06:03.010 20:04:50 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@198 -- # uname -s 00:06:03.010 20:04:50 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:03.010 20:04:50 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:03.010 20:04:50 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:03.010 20:04:50 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:03.010 20:04:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.010 20:04:50 -- common/autotest_common.sh@10 -- # set +x 00:06:03.010 20:04:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:06:03.010 20:04:50 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:03.010 20:04:50 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:06:03.010 20:04:50 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:06:03.010 20:04:50 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:03.010 20:04:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.010 20:04:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.010 20:04:50 -- common/autotest_common.sh@10 -- # set +x 00:06:03.270 ************************************ 00:06:03.270 START TEST llvm_fuzz 00:06:03.270 ************************************ 00:06:03.270 20:04:50 llvm_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:03.270 * Looking for test storage... 
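version.sh, which just finished above, derives each component by grepping the matching #define out of include/spdk/version.h, taking the tab-separated second field (hence cut -f2), and stripping quotes; patch 0 is omitted and the -pre suffix is reported as rc0. A condensed sketch of that pipeline:

  get_header_version() { # e.g. get_header_version MAJOR -> 24
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
      cut -f2 | tr -d '"'
  }
  version=$(get_header_version MAJOR).$(get_header_version MINOR)  # 24.9
  [[ $(get_header_version SUFFIX) == -pre ]] && version+=rc0       # 24.9rc0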
00:06:03.271 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:03.271 20:04:50 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.271 20:04:50 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:03.271 ************************************ 00:06:03.271 START TEST nvmf_fuzz 00:06:03.271 ************************************ 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:03.271 * Looking for test storage... 
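get_fuzzer_targets, traced above, simply globs test/fuzz/llvm/ and keeps basenames, so the list comes out as 'common.sh llvm-gcov.sh nvmf vfio'; llvm.sh's case then skips the two helper scripts and run_tests the real targets. The relevant shell, reconstructed from the trace (the vfio branch is assumed symmetric to the nvmf one shown):

  fuzzers=("$rootdir/test/fuzz/llvm/"*)   # glob every entry under llvm/
  fuzzers=("${fuzzers[@]##*/}")           # strip paths: common.sh llvm-gcov.sh nvmf vfio
  for fuzzer in "${fuzzers[@]}"; do
    case "$fuzzer" in
      nvmf | vfio) run_test "${fuzzer}_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
    esac
  done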
00:06:03.271 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- 
common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:03.271 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:03.272 #define SPDK_CONFIG_H 00:06:03.272 #define SPDK_CONFIG_APPS 1 00:06:03.272 #define SPDK_CONFIG_ARCH native 00:06:03.272 #undef SPDK_CONFIG_ASAN 00:06:03.272 #undef SPDK_CONFIG_AVAHI 00:06:03.272 #undef SPDK_CONFIG_CET 00:06:03.272 #define SPDK_CONFIG_COVERAGE 1 00:06:03.272 #define SPDK_CONFIG_CROSS_PREFIX 00:06:03.272 #undef SPDK_CONFIG_CRYPTO 00:06:03.272 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:03.272 #undef SPDK_CONFIG_CUSTOMOCF 00:06:03.272 #undef SPDK_CONFIG_DAOS 00:06:03.272 #define SPDK_CONFIG_DAOS_DIR 00:06:03.272 #define SPDK_CONFIG_DEBUG 1 00:06:03.272 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:03.272 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:03.272 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:03.272 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:03.272 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:03.272 #undef SPDK_CONFIG_DPDK_UADK 00:06:03.272 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:03.272 #define SPDK_CONFIG_EXAMPLES 1 00:06:03.272 #undef SPDK_CONFIG_FC 00:06:03.272 #define SPDK_CONFIG_FC_PATH 00:06:03.272 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:03.272 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:03.272 #undef SPDK_CONFIG_FUSE 00:06:03.272 #define SPDK_CONFIG_FUZZER 1 00:06:03.272 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:03.272 #undef SPDK_CONFIG_GOLANG 00:06:03.272 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:03.272 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:03.272 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:03.272 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:03.272 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:03.272 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:03.272 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:03.272 #define SPDK_CONFIG_IDXD 1 00:06:03.272 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:03.272 #undef SPDK_CONFIG_IPSEC_MB 00:06:03.272 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:03.272 #define SPDK_CONFIG_ISAL 1 00:06:03.272 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:03.272 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:03.272 #define SPDK_CONFIG_LIBDIR 00:06:03.272 #undef SPDK_CONFIG_LTO 00:06:03.272 #define SPDK_CONFIG_MAX_LCORES 00:06:03.272 #define SPDK_CONFIG_NVME_CUSE 1 00:06:03.272 #undef SPDK_CONFIG_OCF 00:06:03.272 #define SPDK_CONFIG_OCF_PATH 00:06:03.272 #define SPDK_CONFIG_OPENSSL_PATH 00:06:03.272 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:03.272 #define SPDK_CONFIG_PGO_DIR 00:06:03.272 #undef SPDK_CONFIG_PGO_USE 00:06:03.272 #define SPDK_CONFIG_PREFIX /usr/local 00:06:03.272 #undef SPDK_CONFIG_RAID5F 00:06:03.272 #undef SPDK_CONFIG_RBD 00:06:03.272 #define SPDK_CONFIG_RDMA 1 
00:06:03.272 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:03.272 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:03.272 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:03.272 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:03.272 #undef SPDK_CONFIG_SHARED 00:06:03.272 #undef SPDK_CONFIG_SMA 00:06:03.272 #define SPDK_CONFIG_TESTS 1 00:06:03.272 #undef SPDK_CONFIG_TSAN 00:06:03.272 #define SPDK_CONFIG_UBLK 1 00:06:03.272 #define SPDK_CONFIG_UBSAN 1 00:06:03.272 #undef SPDK_CONFIG_UNIT_TESTS 00:06:03.272 #undef SPDK_CONFIG_URING 00:06:03.272 #define SPDK_CONFIG_URING_PATH 00:06:03.272 #undef SPDK_CONFIG_URING_ZNS 00:06:03.272 #undef SPDK_CONFIG_USDT 00:06:03.272 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:03.272 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:03.272 #define SPDK_CONFIG_VFIO_USER 1 00:06:03.272 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:03.272 #define SPDK_CONFIG_VHOST 1 00:06:03.272 #define SPDK_CONFIG_VIRTIO 1 00:06:03.272 #undef SPDK_CONFIG_VTUNE 00:06:03.272 #define SPDK_CONFIG_VTUNE_DIR 00:06:03.272 #define SPDK_CONFIG_WERROR 1 00:06:03.272 #define SPDK_CONFIG_WPDK_DIR 00:06:03.272 #undef SPDK_CONFIG_XNVME 00:06:03.272 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:03.272 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@57 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@61 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # : 1 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # : 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # : 0 00:06:03.534 
20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # : 1 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # : 1 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # : 1 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # : 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@133 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # : 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # : true 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # : 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@166 -- # : 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # : 0 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:03.534 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # : 0 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 
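The `: 0` / `export SPDK_RUN_*` pairs traced above are autotest_common.sh giving every test flag a default before exporting it. A minimal sketch of that shell idiom, using SPDK_RUN_VALGRIND from the trace (the `:=` expansion assigns only when the variable is unset, so values sourced earlier from autorun-spdk.conf survive):

    # Give the flag a default of 0 unless the CI config already set it,
    # then export it so child test processes see the same value.
    : "${SPDK_RUN_VALGRIND:=0}"
    export SPDK_RUN_VALGRIND
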
00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # cat 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j88 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # [[ -z 1662948 ]] 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # kill -0 1662948 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:03.535 
20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.Tt1Bhn 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.Tt1Bhn/tests/nvmf /tmp/spdk.Tt1Bhn 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # df -T 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=91042148352 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=99792764928 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8750616576 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=49891672064 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=49896382464 00:06:03.535 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- 
# mounts["$mount"]=tmpfs 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=19952656384 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=19958554624 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5898240 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=49895632896 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=49896382464 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=749568 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=9979269120 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=9979273216 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:03.536 * Looking for test storage... 
00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@373 -- # target_space=91042148352 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # new_size=10965209088 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.536 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # return 0 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # true 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:03.536 20:04:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:03.536 [2024-05-16 20:04:50.553615] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:03.536 [2024-05-16 20:04:50.553694] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662994 ] 00:06:03.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.796 [2024-05-16 20:04:50.747854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.796 [2024-05-16 20:04:50.812973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.796 [2024-05-16 20:04:50.872221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.796 [2024-05-16 20:04:50.888186] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:03.796 [2024-05-16 20:04:50.888542] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:03.796 INFO: Running with entropic power schedule (0xFF, 100). 00:06:03.796 INFO: Seed: 1536641695 00:06:03.796 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:03.796 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:03.796 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:03.796 INFO: A corpus is not provided, starting from an empty corpus 00:06:03.796 #2 INITED exec/s: 0 rss: 64Mb 00:06:03.796 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
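The prologue traced above shows how nvmf/run.sh derives a unique listen port for each fuzzer index and rewrites the target's JSON config to match: `printf %02d 0` yields the `00` suffix, giving port 4400 for fuzzer 0. A plausible reconstruction from the commands in the trace; the output redirection to /tmp/fuzz_json_0.conf and the $testdir/$rootdir variables are assumptions, and the config path is shortened:

    fuzzer_type=0
    port="44$(printf %02d "$fuzzer_type")"   # fuzzer 0 -> 4400, fuzzer 1 -> 4401, ...
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    corpus_dir="$rootdir/../corpus/llvm_nvmf_$(printf %02d "$fuzzer_type")"
    mkdir -p "$corpus_dir"
    # Point the replayed JSON-RPC config at the same port (4420 is the stock NVMe/TCP port).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$testdir/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"
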
00:06:03.796 This may also happen if the target rejected all inputs we tried so far 00:06:03.796 [2024-05-16 20:04:50.933832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:03.796 [2024-05-16 20:04:50.933860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.055 NEW_FUNC[1/686]: 0x482d20 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:04.055 NEW_FUNC[2/686]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:04.055 #7 NEW cov: 11832 ft: 11834 corp: 2/126b lim: 320 exec/s: 0 rss: 71Mb L: 125/125 MS: 5 ChangeBit-CopyPart-EraseBytes-CrossOver-InsertRepeatedBytes- 00:06:04.055 [2024-05-16 20:04:51.074198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.055 [2024-05-16 20:04:51.074230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.055 #11 NEW cov: 11975 ft: 12389 corp: 3/253b lim: 320 exec/s: 0 rss: 71Mb L: 127/127 MS: 4 ChangeBit-ShuffleBytes-InsertByte-CrossOver- 00:06:04.055 [2024-05-16 20:04:51.114210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.055 [2024-05-16 20:04:51.114237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.055 #12 NEW cov: 11981 ft: 12512 corp: 4/380b lim: 320 exec/s: 0 rss: 71Mb L: 127/127 MS: 1 ChangeBit- 00:06:04.055 [2024-05-16 20:04:51.164333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.055 [2024-05-16 20:04:51.164358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.055 #13 NEW cov: 12066 ft: 12895 corp: 5/507b lim: 320 exec/s: 0 rss: 71Mb L: 127/127 MS: 1 ShuffleBytes- 00:06:04.315 [2024-05-16 20:04:51.204584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.204609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.315 [2024-05-16 20:04:51.204660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.204671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.315 #17 NEW cov: 12068 ft: 13163 corp: 6/635b lim: 320 exec/s: 0 rss: 71Mb L: 128/128 MS: 4 ChangeBit-InsertByte-EraseBytes-CrossOver- 00:06:04.315 [2024-05-16 20:04:51.244666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.244691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:04.315 [2024-05-16 20:04:51.244742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.244753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.315 #18 NEW cov: 12068 ft: 13245 corp: 7/771b lim: 320 exec/s: 0 rss: 72Mb L: 136/136 MS: 1 CMP- DE: "\000\006\245Pi\221\334\242"- 00:06:04.315 [2024-05-16 20:04:51.294807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00002000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.294832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.315 [2024-05-16 20:04:51.294884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.294895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.315 #19 NEW cov: 12068 ft: 13294 corp: 8/907b lim: 320 exec/s: 0 rss: 72Mb L: 136/136 MS: 1 ChangeBit- 00:06:04.315 [2024-05-16 20:04:51.344855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (41) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.315 [2024-05-16 20:04:51.344880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.315 NEW_FUNC[1/1]: 0x17b1630 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:04.315 #20 NEW cov: 12081 ft: 13713 corp: 9/1034b lim: 320 exec/s: 0 rss: 72Mb L: 127/136 MS: 1 ChangeByte- 00:06:04.315 [2024-05-16 20:04:51.395257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.315 [2024-05-16 20:04:51.395281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.315 [2024-05-16 20:04:51.395334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.395345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.315 [2024-05-16 20:04:51.395396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.395407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.315 #21 NEW cov: 12081 ft: 13944 corp: 10/1241b lim: 320 exec/s: 0 rss: 72Mb L: 207/207 MS: 1 CrossOver- 00:06:04.315 [2024-05-16 20:04:51.445152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.315 [2024-05-16 20:04:51.445176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.575 #22 NEW cov: 12081 ft: 14006 corp: 11/1366b lim: 320 exec/s: 0 rss: 72Mb L: 125/207 MS: 1 CopyPart- 00:06:04.575 [2024-05-16 20:04:51.485266] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:ff000000 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.575 [2024-05-16 20:04:51.485291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.575 #23 NEW cov: 12081 ft: 14078 corp: 12/1493b lim: 320 exec/s: 0 rss: 72Mb L: 127/207 MS: 1 ChangeBinInt- 00:06:04.575 [2024-05-16 20:04:51.525382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c4) qid:0 cid:4 nsid:c4c4c4c4 cdw10:c4c4c4c4 cdw11:c4c4c4c4 00:06:04.575 [2024-05-16 20:04:51.525406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.575 #24 NEW cov: 12081 ft: 14096 corp: 13/1591b lim: 320 exec/s: 0 rss: 72Mb L: 98/207 MS: 1 InsertRepeatedBytes- 00:06:04.575 [2024-05-16 20:04:51.565601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00002000 cdw11:00000000 00:06:04.575 [2024-05-16 20:04:51.565625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.575 [2024-05-16 20:04:51.565678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.575 [2024-05-16 20:04:51.565689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.575 #25 NEW cov: 12081 ft: 14110 corp: 14/1727b lim: 320 exec/s: 0 rss: 72Mb L: 136/207 MS: 1 ChangeBinInt- 00:06:04.575 [2024-05-16 20:04:51.615633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.575 [2024-05-16 20:04:51.615657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.575 #26 NEW cov: 12081 ft: 14137 corp: 15/1852b lim: 320 exec/s: 0 rss: 72Mb L: 125/207 MS: 1 ChangeByte- 00:06:04.575 [2024-05-16 20:04:51.665757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:52000000 00:06:04.575 [2024-05-16 20:04:51.665781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.575 #27 NEW cov: 12081 ft: 14171 corp: 16/1977b lim: 320 exec/s: 0 rss: 72Mb L: 125/207 MS: 1 ChangeByte- 00:06:04.575 [2024-05-16 20:04:51.705876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (1f) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.575 [2024-05-16 20:04:51.705900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.835 NEW_FUNC[1/1]: 0x1381c90 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2038 00:06:04.835 #28 NEW cov: 12112 ft: 14313 corp: 17/2103b lim: 320 exec/s: 0 rss: 72Mb L: 126/207 MS: 1 InsertByte- 00:06:04.835 [2024-05-16 20:04:51.746011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.835 [2024-05-16 20:04:51.746036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.835 #29 NEW cov: 12112 ft: 14338 corp: 18/2211b lim: 320 exec/s: 0 rss: 72Mb L: 108/207 MS: 1 EraseBytes- 00:06:04.835 [2024-05-16 20:04:51.796142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.835 [2024-05-16 20:04:51.796166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.835 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:04.835 #30 NEW cov: 12135 ft: 14354 corp: 19/2323b lim: 320 exec/s: 0 rss: 72Mb L: 112/207 MS: 1 EraseBytes- 00:06:04.835 [2024-05-16 20:04:51.836288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.835 [2024-05-16 20:04:51.836312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.835 #31 NEW cov: 12135 ft: 14369 corp: 20/2436b lim: 320 exec/s: 0 rss: 72Mb L: 113/207 MS: 1 EraseBytes- 00:06:04.835 [2024-05-16 20:04:51.876672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.835 [2024-05-16 20:04:51.876697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.835 [2024-05-16 20:04:51.876747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.835 [2024-05-16 20:04:51.876757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.835 [2024-05-16 20:04:51.876808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.835 [2024-05-16 20:04:51.876819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.835 #32 NEW cov: 12135 ft: 14483 corp: 21/2651b lim: 320 exec/s: 0 rss: 72Mb L: 215/215 MS: 1 CopyPart- 00:06:04.835 [2024-05-16 20:04:51.916661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.835 [2024-05-16 20:04:51.916686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.835 [2024-05-16 20:04:51.916739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.835 [2024-05-16 20:04:51.916750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.835 #33 NEW cov: 12135 ft: 14495 corp: 22/2784b lim: 320 exec/s: 33 rss: 72Mb L: 133/215 MS: 1 CrossOver- 00:06:04.835 [2024-05-16 20:04:51.956813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.835 [2024-05-16 20:04:51.956837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.835 [2024-05-16 20:04:51.956894] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 00:06:04.835 [2024-05-16 20:04:51.956905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.835 #34 NEW cov: 12135 ft: 14510 corp: 23/2912b lim: 320 exec/s: 34 rss: 73Mb L: 128/215 MS: 1 InsertByte- 00:06:05.094 [2024-05-16 20:04:51.996910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:49ad3202 00:06:05.094 [2024-05-16 20:04:51.996934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.094 [2024-05-16 20:04:51.996986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.094 [2024-05-16 20:04:51.996997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.094 #35 NEW cov: 12135 ft: 14516 corp: 24/3048b lim: 320 exec/s: 35 rss: 73Mb L: 136/215 MS: 1 CMP- DE: "\001\000\000\000\0022\255I"- 00:06:05.095 [2024-05-16 20:04:52.046993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.095 [2024-05-16 20:04:52.047019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.095 [2024-05-16 20:04:52.047070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.095 [2024-05-16 20:04:52.047081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.095 #36 NEW cov: 12135 ft: 14540 corp: 25/3215b lim: 320 exec/s: 36 rss: 73Mb L: 167/215 MS: 1 CopyPart- 00:06:05.095 [2024-05-16 20:04:52.097040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.095 [2024-05-16 20:04:52.097066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.095 #37 NEW cov: 12135 ft: 14548 corp: 26/3342b lim: 320 exec/s: 37 rss: 73Mb L: 127/215 MS: 1 CrossOver- 00:06:05.095 [2024-05-16 20:04:52.137191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xff0000 00:06:05.095 [2024-05-16 20:04:52.137215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.095 #38 NEW cov: 12135 ft: 14563 corp: 27/3469b lim: 320 exec/s: 38 rss: 73Mb L: 127/215 MS: 1 ChangeBinInt- 00:06:05.095 [2024-05-16 20:04:52.177370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.095 [2024-05-16 20:04:52.177394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.095 [2024-05-16 20:04:52.177446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:06:05.095 [2024-05-16 20:04:52.177463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.095 #39 NEW cov: 12135 ft: 14589 corp: 28/3600b lim: 320 exec/s: 39 rss: 73Mb L: 131/215 MS: 1 CopyPart-
00:06:05.095 [2024-05-16 20:04:52.217514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.095 [2024-05-16 20:04:52.217539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.095 [2024-05-16 20:04:52.217591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.095 [2024-05-16 20:04:52.217605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.354 #40 NEW cov: 12135 ft: 14628 corp: 29/3767b lim: 320 exec/s: 40 rss: 73Mb L: 167/215 MS: 1 CopyPart-
00:06:05.354 [2024-05-16 20:04:52.267520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.354 [2024-05-16 20:04:52.267545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.354 #41 NEW cov: 12135 ft: 14634 corp: 30/3892b lim: 320 exec/s: 41 rss: 73Mb L: 125/215 MS: 1 ChangeBit-
00:06:05.354 [2024-05-16 20:04:52.307605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.354 [2024-05-16 20:04:52.307630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.354 #42 NEW cov: 12135 ft: 14673 corp: 31/4017b lim: 320 exec/s: 42 rss: 73Mb L: 125/215 MS: 1 ChangeBit-
00:06:05.354 [2024-05-16 20:04:52.357805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.354 [2024-05-16 20:04:52.357828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.354 #43 NEW cov: 12135 ft: 14750 corp: 32/4129b lim: 320 exec/s: 43 rss: 74Mb L: 112/215 MS: 1 ChangeBit-
00:06:05.354 [2024-05-16 20:04:52.408190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.354 [2024-05-16 20:04:52.408215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.354 [2024-05-16 20:04:52.408270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.354 [2024-05-16 20:04:52.408282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.354 [2024-05-16 20:04:52.408351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.354 [2024-05-16 20:04:52.408363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:05.355 #44 NEW cov: 12135 ft: 14758 corp: 33/4359b lim: 320 exec/s: 44 rss: 74Mb L: 230/230 MS: 1 CopyPart-
00:06:05.355 [2024-05-16 20:04:52.458063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:ff000000 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.355 [2024-05-16 20:04:52.458088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.355 #45 NEW cov: 12135 ft: 14763 corp: 34/4435b lim: 320 exec/s: 45 rss: 74Mb L: 76/230 MS: 1 EraseBytes-
00:06:05.614 [2024-05-16 20:04:52.508369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.614 [2024-05-16 20:04:52.508395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.614 [2024-05-16 20:04:52.508447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000f0 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.508466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.614 #46 NEW cov: 12135 ft: 14776 corp: 35/4602b lim: 320 exec/s: 46 rss: 74Mb L: 167/230 MS: 1 ChangeByte-
00:06:05.614 [2024-05-16 20:04:52.548434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:49ad3202
00:06:05.614 [2024-05-16 20:04:52.548463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.614 [2024-05-16 20:04:52.548516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.548527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.614 #47 NEW cov: 12135 ft: 14780 corp: 36/4738b lim: 320 exec/s: 47 rss: 74Mb L: 136/230 MS: 1 ChangeBinInt-
00:06:05.614 [2024-05-16 20:04:52.598636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:ff000000 cdw10:00210000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.614 [2024-05-16 20:04:52.598659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.614 [2024-05-16 20:04:52.598710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.598721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.614 #48 NEW cov: 12135 ft: 14793 corp: 37/4866b lim: 320 exec/s: 48 rss: 74Mb L: 128/230 MS: 1 InsertByte-
00:06:05.614 [2024-05-16 20:04:52.638772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.614 [2024-05-16 20:04:52.638796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.614 [2024-05-16 20:04:52.638849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.638861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.614 #51 NEW cov: 12152 ft: 14839 corp: 38/5010b lim: 320 exec/s: 51 rss: 74Mb L: 144/230 MS: 3 InsertRepeatedBytes-ChangeByte-InsertRepeatedBytes-
00:06:05.614 [2024-05-16 20:04:52.678858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00002000 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.678881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.614 [2024-05-16 20:04:52.678932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.678943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.614 #52 NEW cov: 12152 ft: 14842 corp: 39/5147b lim: 320 exec/s: 52 rss: 74Mb L: 137/230 MS: 1 InsertByte-
00:06:05.614 [2024-05-16 20:04:52.719134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.719159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.614 [2024-05-16 20:04:52.719211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.719223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.614 [2024-05-16 20:04:52.719272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.614 [2024-05-16 20:04:52.719283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:05.614 #53 NEW cov: 12152 ft: 14850 corp: 40/5400b lim: 320 exec/s: 53 rss: 74Mb L: 253/253 MS: 1 CrossOver-
00:06:05.614 [2024-05-16 20:04:52.759002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.614 [2024-05-16 20:04:52.759027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.874 #54 NEW cov: 12152 ft: 14859 corp: 41/5508b lim: 320 exec/s: 54 rss: 74Mb L: 108/253 MS: 1 ChangeBit-
00:06:05.874 [2024-05-16 20:04:52.799336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.874 [2024-05-16 20:04:52.799359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.874 [2024-05-16 20:04:52.799410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.874 [2024-05-16 20:04:52.799421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.874 [2024-05-16 20:04:52.799476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000
00:06:05.874 [2024-05-16 20:04:52.799488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:05.874 #55 NEW cov: 12152 ft: 14903 corp: 42/5761b lim: 320 exec/s: 55 rss: 74Mb L: 253/253 MS: 1 ShuffleBytes-
00:06:05.874 [2024-05-16 20:04:52.849344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (44) qid:0 cid:4 nsid:0 cdw10:00002000 cdw11:00000000
00:06:05.874 [2024-05-16 20:04:52.849368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.874 [2024-05-16 20:04:52.849420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:6950a506 cdw11:00a2dc91
00:06:05.874 [2024-05-16 20:04:52.849431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:05.874 #56 NEW cov: 12152 ft: 14914 corp: 43/5905b lim: 320 exec/s: 56 rss: 74Mb L: 144/253 MS: 1 PersAutoDict- DE: "\000\006\245Pi\221\334\242"-
00:06:05.874 [2024-05-16 20:04:52.889315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c4) qid:0 cid:4 nsid:c4c4c4c4 cdw10:c4c4c4c4 cdw11:c4c4c4c4
00:06:05.874 [2024-05-16 20:04:52.889339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:05.874 #57 NEW cov: 12152 ft: 14965 corp: 44/6004b lim: 320 exec/s: 28 rss: 74Mb L: 99/253 MS: 1 InsertByte-
00:06:05.874 #57 DONE cov: 12152 ft: 14965 corp: 44/6004b lim: 320 exec/s: 28 rss: 74Mb
00:06:05.874 ###### Recommended dictionary. ######
00:06:05.874 "\000\006\245Pi\221\334\242" # Uses: 1
00:06:05.874 "\001\000\000\000\0022\255I" # Uses: 0
00:06:05.874 ###### End of recommended dictionary. ######
00:06:05.874 Done 57 runs in 2 second(s)
00:06:05.874 [2024-05-16 20:04:52.923803] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401'
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:06.134 20:04:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1
00:06:06.134 [2024-05-16 20:04:53.092070] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:06:06.134 [2024-05-16 20:04:53.092144] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663433 ]
00:06:06.134 EAL: No free 2048 kB hugepages reported on node 1
00:06:06.134 [2024-05-16 20:04:53.274953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.394 [2024-05-16 20:04:53.339790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.394 [2024-05-16 20:04:53.398122] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:06.394 [2024-05-16 20:04:53.414089] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:06:06.394 [2024-05-16 20:04:53.414431] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 ***
00:06:06.394 INFO: Running with entropic power schedule (0xFF, 100).
00:06:06.394 INFO: Seed: 4063660892
00:06:06.394 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f),
00:06:06.394 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0),
00:06:06.394 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1
00:06:06.394 INFO: A corpus is not provided, starting from an empty corpus
00:06:06.394 #2 INITED exec/s: 0 rss: 63Mb
00:06:06.394 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:06.394 This may also happen if the target rejected all inputs we tried so far
00:06:06.394 [2024-05-16 20:04:53.459576] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.394 [2024-05-16 20:04:53.459704] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.394 [2024-05-16 20:04:53.459819] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.394 [2024-05-16 20:04:53.459930] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.394 [2024-05-16 20:04:53.460042] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:06.394 [2024-05-16 20:04:53.460261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.394 [2024-05-16 20:04:53.460293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.394 [2024-05-16 20:04:53.460347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.394 [2024-05-16 20:04:53.460361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.394 [2024-05-16 20:04:53.460411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.394 [2024-05-16 20:04:53.460425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.394 [2024-05-16 20:04:53.460477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.394 [2024-05-16 20:04:53.460491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.394 [2024-05-16 20:04:53.460542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.394 [2024-05-16 20:04:53.460556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.659 NEW_FUNC[1/686]: 0x483620 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67
00:06:06.659 NEW_FUNC[2/686]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:06.659 #19 NEW cov: 11862 ft: 11863 corp: 2/31b lim: 30 exec/s: 0 rss: 71Mb L: 30/30 MS: 2 ShuffleBytes-InsertRepeatedBytes-
00:06:06.659 [2024-05-16 20:04:53.599859] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10248) > buf size (4096)
00:06:06.659 [2024-05-16 20:04:53.600099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.600129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.659 #20 NEW cov: 12021 ft: 13138 corp: 3/40b lim: 30 exec/s: 0 rss: 71Mb L: 9/30 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"-
00:06:06.659 [2024-05-16 20:04:53.640058] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.640185] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.640306] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.640424] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.640548] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:06.659 [2024-05-16 20:04:53.640779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.640805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.659 [2024-05-16 20:04:53.640859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.640874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.659 [2024-05-16 20:04:53.640929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.640943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.659 [2024-05-16 20:04:53.640995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.641009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.659 [2024-05-16 20:04:53.641061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.641074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.659 #26 NEW cov: 12027 ft: 13363 corp: 4/70b lim: 30 exec/s: 0 rss: 71Mb L: 30/30 MS: 1 CrossOver-
00:06:06.659 [2024-05-16 20:04:53.690146] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.690272] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.690387] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.690529] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.690650] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:06.659 [2024-05-16 20:04:53.690874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.690901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.659 [2024-05-16 20:04:53.690957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.690970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.659 [2024-05-16 20:04:53.691021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.691033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.659 [2024-05-16 20:04:53.691085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.691098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.659 [2024-05-16 20:04:53.691148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.659 [2024-05-16 20:04:53.691160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.659 #27 NEW cov: 12112 ft: 13652 corp: 5/100b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeByte-
00:06:06.659 [2024-05-16 20:04:53.740304] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.740431] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.740554] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.740668] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.659 [2024-05-16 20:04:53.740786] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:06.659 [2024-05-16 20:04:53.741014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.741040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.660 [2024-05-16 20:04:53.741093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.741106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.660 [2024-05-16 20:04:53.741157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.741170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.660 [2024-05-16 20:04:53.741226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.741239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.660 [2024-05-16 20:04:53.741289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.741306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.660 #28 NEW cov: 12112 ft: 13816 corp: 6/130b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeByte-
00:06:06.660 [2024-05-16 20:04:53.780398] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.660 [2024-05-16 20:04:53.780527] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.660 [2024-05-16 20:04:53.780646] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.660 [2024-05-16 20:04:53.780761] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.660 [2024-05-16 20:04:53.780881] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:06.660 [2024-05-16 20:04:53.781102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.781128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.660 [2024-05-16 20:04:53.781182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.781196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.660 [2024-05-16 20:04:53.781248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.781260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.660 [2024-05-16 20:04:53.781314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d813a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.781325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.660 [2024-05-16 20:04:53.781379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.660 [2024-05-16 20:04:53.781391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.920 #29 NEW cov: 12112 ft: 13984 corp: 7/160b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBinInt-
00:06:06.920 [2024-05-16 20:04:53.830592] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.830724] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.830843] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.830961] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.831081] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a
00:06:06.920 [2024-05-16 20:04:53.831320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.831346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.831405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.831418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.831476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.831489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.831539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.831551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.831603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.831615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.920 #30 NEW cov: 12112 ft: 14030 corp: 8/190b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeByte-
00:06:06.920 [2024-05-16 20:04:53.880686] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.880812] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.880933] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.881050] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.881167] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:06.920 [2024-05-16 20:04:53.881394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.881419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.881477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.881491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.881544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.881556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.881607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d813a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.881619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.881673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.881685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.920 #31 NEW cov: 12112 ft: 14049 corp: 9/220b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ShuffleBytes-
00:06:06.920 [2024-05-16 20:04:53.930847] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.930976] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.931097] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.931214] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.931335] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:06.920 [2024-05-16 20:04:53.931569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.931595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.931648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.931661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.931712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.931724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.931778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.931790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.920 [2024-05-16 20:04:53.931857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:c3f581c2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.920 [2024-05-16 20:04:53.931869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.920 #32 NEW cov: 12112 ft: 14096 corp: 10/250b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBinInt-
00:06:06.920 [2024-05-16 20:04:53.970930] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.971057] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.920 [2024-05-16 20:04:53.971176] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.921 [2024-05-16 20:04:53.971311] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.921 [2024-05-16 20:04:53.971435] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a
00:06:06.921 [2024-05-16 20:04:53.971669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:53.971694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.921 [2024-05-16 20:04:53.971748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:53.971760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.921 [2024-05-16 20:04:53.971815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d810f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:53.971827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.921 [2024-05-16 20:04:53.971880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:53.971893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.921 [2024-05-16 20:04:53.971948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:53.971961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.921 #33 NEW cov: 12112 ft: 14192 corp: 11/280b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CopyPart-
00:06:06.921 [2024-05-16 20:04:54.021101] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.921 [2024-05-16 20:04:54.021227] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.921 [2024-05-16 20:04:54.021341] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.921 [2024-05-16 20:04:54.021463] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:06.921 [2024-05-16 20:04:54.021598] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:06.921 [2024-05-16 20:04:54.021823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b33d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:54.021848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:06.921 [2024-05-16 20:04:54.021904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:54.021916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:06.921 [2024-05-16 20:04:54.021981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:54.021995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:06.921 [2024-05-16 20:04:54.022045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:54.022057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:06.921 [2024-05-16 20:04:54.022111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:c3f581c2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:06.921 [2024-05-16 20:04:54.022123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:06.921 #34 NEW cov: 12112 ft: 14259 corp: 12/310b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeByte-
00:06:07.179 [2024-05-16 20:04:54.071224] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.179 [2024-05-16 20:04:54.071358] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.179 [2024-05-16 20:04:54.071484] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.179 [2024-05-16 20:04:54.071609] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.179 [2024-05-16 20:04:54.071731] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a
00:06:07.179 [2024-05-16 20:04:54.071987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.179 [2024-05-16 20:04:54.072012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.179 [2024-05-16 20:04:54.072066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7e3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.179 [2024-05-16 20:04:54.072079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:07.179 [2024-05-16 20:04:54.072129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d810f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.179 [2024-05-16 20:04:54.072143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.072196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.072209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.072260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.072272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:07.180 #35 NEW cov: 12112 ft: 14278 corp: 13/340b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeByte-
00:06:07.180 [2024-05-16 20:04:54.121398] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.121536] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.121656] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.121769] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.121888] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:07.180 [2024-05-16 20:04:54.122139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.122162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.122218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.122231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.122282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:613d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.122294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.122346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.122358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.122408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.122420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:07.180 #36 NEW cov: 12112 ft: 14292 corp: 14/370b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeByte-
00:06:07.180 [2024-05-16 20:04:54.161475] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.161624] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.161744] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.161862] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.161982] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:07.180 [2024-05-16 20:04:54.162217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.162245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.162300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.162313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.162366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.162379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.162432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d8161 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.162445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.162501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.162514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:07.180 #37 NEW cov: 12112 ft: 14315 corp: 15/400b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CrossOver-
00:06:07.180 [2024-05-16 20:04:54.201505] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096)
00:06:07.180 [2024-05-16 20:04:54.201730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.201754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.180 #38 NEW cov: 12112 ft: 14368 corp: 16/409b lim: 30 exec/s: 0 rss: 72Mb L: 9/30 MS: 1 CopyPart-
00:06:07.180 [2024-05-16 20:04:54.251732] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.251860] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.251973] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.252086] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.252206] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:07.180 [2024-05-16 20:04:54.252438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.252466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.252521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.252534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.252589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:613b813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.252602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.252656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.252671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:07.180 [2024-05-16 20:04:54.252724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.252736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:07.180 #39 NEW cov: 12112 ft: 14402 corp: 17/439b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBinInt-
00:06:07.180 [2024-05-16 20:04:54.301760] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.180 [2024-05-16 20:04:54.301997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.180 [2024-05-16 20:04:54.302021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.180 #42 NEW cov: 12112 ft: 14441 corp: 18/449b lim: 30 exec/s: 0 rss: 72Mb L: 10/30 MS: 3 ShuffleBytes-InsertByte-CrossOver-
00:06:07.439 [2024-05-16 20:04:54.342001] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.342130] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000f3d
00:06:07.439 [2024-05-16 20:04:54.342250] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.342367] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.342491] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a
00:06:07.439 [2024-05-16 20:04:54.342716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.342741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.342795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.342808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.342862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.342875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.342927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.342939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.342992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.343004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:07.439 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:06:07.439 #43 NEW cov: 12135 ft: 14489 corp: 19/479b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ShuffleBytes-
00:06:07.439 [2024-05-16 20:04:54.382102] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.382228] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (324856) > buf size (4096)
00:06:07.439 [2024-05-16 20:04:54.382346] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3d
00:06:07.439 [2024-05-16 20:04:54.382471] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.382591] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:07.439 [2024-05-16 20:04:54.382812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.382837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.382892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.382905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.382962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.382975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.383029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.383041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.383094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.383106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:07.439 #44 NEW cov: 12135 ft: 14505 corp: 20/509b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"-
00:06:07.439 [2024-05-16 20:04:54.422206] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.422336] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002a3d
00:06:07.439 [2024-05-16 20:04:54.422462] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:07.439 [2024-05-16 20:04:54.422691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.422716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.422772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.422785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.422839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3a3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.422852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:07.439 #45 NEW cov: 12135 ft: 14745 corp: 21/531b lim: 30 exec/s: 45 rss: 72Mb L: 22/30 MS: 1 EraseBytes-
00:06:07.439 [2024-05-16 20:04:54.472388] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.472518] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.472637] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.472753] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d
00:06:07.439 [2024-05-16 20:04:54.472867] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a
00:06:07.439 [2024-05-16 20:04:54.473104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b33d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.473129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.473185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.473198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.473249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.473262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.473315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.473328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:07.439 [2024-05-16 20:04:54.473383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:c3f581c2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.439 [2024-05-16 20:04:54.473396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0
sqhd:0013 p:0 m:0 dnr:0 00:06:07.439 #46 NEW cov: 12135 ft: 14754 corp: 22/561b lim: 30 exec/s: 46 rss: 72Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:07.439 [2024-05-16 20:04:54.522578] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.439 [2024-05-16 20:04:54.522721] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.439 [2024-05-16 20:04:54.522858] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.439 [2024-05-16 20:04:54.522975] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.439 [2024-05-16 20:04:54.523094] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:07.439 [2024-05-16 20:04:54.523317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d8c813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.523343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.439 [2024-05-16 20:04:54.523397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.523411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.439 [2024-05-16 20:04:54.523466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.523479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.439 [2024-05-16 20:04:54.523534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.523548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.439 [2024-05-16 20:04:54.523605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.523617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.439 #47 NEW cov: 12135 ft: 14765 corp: 23/591b lim: 30 exec/s: 47 rss: 73Mb L: 30/30 MS: 1 ChangeByte- 00:06:07.439 [2024-05-16 20:04:54.562629] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.439 [2024-05-16 20:04:54.562753] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.439 [2024-05-16 20:04:54.562870] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.439 [2024-05-16 20:04:54.562984] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.439 [2024-05-16 20:04:54.563098] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:07.439 [2024-05-16 20:04:54.563333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b33d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.563359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.439 [2024-05-16 20:04:54.563415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.563429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.439 [2024-05-16 20:04:54.563481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:403d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.563492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.439 [2024-05-16 20:04:54.563546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.563559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.439 [2024-05-16 20:04:54.563616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:c3f581c2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.439 [2024-05-16 20:04:54.563629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.747 #48 NEW cov: 12135 ft: 14780 corp: 24/621b lim: 30 exec/s: 48 rss: 73Mb L: 30/30 MS: 1 ChangeByte- 00:06:07.747 [2024-05-16 20:04:54.612787] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.747 [2024-05-16 20:04:54.612914] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.747 [2024-05-16 20:04:54.613037] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.747 [2024-05-16 20:04:54.613154] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (43256) > buf size (4096) 00:06:07.747 [2024-05-16 20:04:54.613271] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3d0a 00:06:07.747 [2024-05-16 20:04:54.613504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.613530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.747 [2024-05-16 20:04:54.613584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.613597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.747 [2024-05-16 20:04:54.613653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.613665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.747 [2024-05-16 
20:04:54.613724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.613736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.747 [2024-05-16 20:04:54.613789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.613802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.747 #49 NEW cov: 12135 ft: 14787 corp: 25/651b lim: 30 exec/s: 49 rss: 73Mb L: 30/30 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:07.747 [2024-05-16 20:04:54.662940] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.747 [2024-05-16 20:04:54.663067] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.747 [2024-05-16 20:04:54.663187] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.747 [2024-05-16 20:04:54.663304] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.747 [2024-05-16 20:04:54.663427] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a 00:06:07.747 [2024-05-16 20:04:54.663664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d8c813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.663688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.747 [2024-05-16 20:04:54.663745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.663758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.747 [2024-05-16 20:04:54.663814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.663827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.747 [2024-05-16 20:04:54.663878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.663890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.747 [2024-05-16 20:04:54.663942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.747 [2024-05-16 20:04:54.663954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.748 #50 NEW cov: 12135 ft: 14799 corp: 26/681b lim: 30 exec/s: 50 rss: 73Mb L: 30/30 MS: 1 CrossOver- 00:06:07.748 [2024-05-16 20:04:54.713018] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get 
log page: len (10244) > buf size (4096) 00:06:07.748 [2024-05-16 20:04:54.713472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.713498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.713573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.713586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.713643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.713659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.748 #52 NEW cov: 12152 ft: 14881 corp: 27/703b lim: 30 exec/s: 52 rss: 73Mb L: 22/30 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:07.748 [2024-05-16 20:04:54.753211] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.753339] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.753463] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.753577] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.753695] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:07.748 [2024-05-16 20:04:54.753931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.753956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.754011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.754025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.754081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.754093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.754147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.754159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.754214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:1e00813d 
cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.754226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.748 #53 NEW cov: 12152 ft: 14932 corp: 28/733b lim: 30 exec/s: 53 rss: 73Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:07.748 [2024-05-16 20:04:54.793320] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.793445] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d3d 00:06:07.748 [2024-05-16 20:04:54.793582] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.793701] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.793813] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:07.748 [2024-05-16 20:04:54.794052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.794078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.794136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.794149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.794204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.794219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.794273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2a3d8161 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.794286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.794341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.794354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.748 #54 NEW cov: 12152 ft: 14937 corp: 29/763b lim: 30 exec/s: 54 rss: 73Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:07.748 [2024-05-16 20:04:54.833448] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.833602] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.833725] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.833841] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.833962] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 
00:06:07.748 [2024-05-16 20:04:54.834192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.834217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.834276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.834290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.834348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.834361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.834414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.834427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.834483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:c3c281f5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.834497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.748 #55 NEW cov: 12152 ft: 14946 corp: 30/793b lim: 30 exec/s: 55 rss: 73Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:07.748 [2024-05-16 20:04:54.873575] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.873700] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.873823] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.873939] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:07.748 [2024-05-16 20:04:54.874059] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:07.748 [2024-05-16 20:04:54.874291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.874320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.874375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.874389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.874443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:07.748 [2024-05-16 20:04:54.874462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.874517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.874531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.748 [2024-05-16 20:04:54.874587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:c3f581c2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.748 [2024-05-16 20:04:54.874600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.034 #56 NEW cov: 12152 ft: 14965 corp: 31/823b lim: 30 exec/s: 56 rss: 73Mb L: 30/30 MS: 1 ChangeByte- 00:06:08.034 [2024-05-16 20:04:54.913744] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:54.913875] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:54.913998] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:54.914117] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:54.914232] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:08.034 [2024-05-16 20:04:54.914468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.914493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:54.914550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.914564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:54.914620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.914633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:54.914686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:c33d81f5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.914699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:54.914756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d3d81c2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.914769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.034 #57 NEW cov: 12152 ft: 15008 corp: 32/853b 
lim: 30 exec/s: 57 rss: 73Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:08.034 [2024-05-16 20:04:54.963833] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:54.963962] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:54.964078] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3d3d 00:06:08.034 [2024-05-16 20:04:54.964194] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:54.964311] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:08.034 [2024-05-16 20:04:54.964563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b33d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.964588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:54.964646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.964661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:54.964714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d003d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.964728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:54.964783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.964795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:54.964850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:c3f581c2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:54.964862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.034 #58 NEW cov: 12152 ft: 15038 corp: 33/883b lim: 30 exec/s: 58 rss: 73Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:08.034 [2024-05-16 20:04:55.013982] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (587000) > buf size (4096) 00:06:08.034 [2024-05-16 20:04:55.014221] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.014343] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.014468] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a 00:06:08.034 [2024-05-16 20:04:55.014700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d023d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.014725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 
20:04:55.014777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.014791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:55.014841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.014853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:55.014904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.014918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:55.014970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.014982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.034 #59 NEW cov: 12152 ft: 15074 corp: 34/913b lim: 30 exec/s: 59 rss: 73Mb L: 30/30 MS: 1 CrossOver- 00:06:08.034 [2024-05-16 20:04:55.064098] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.064225] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.064360] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.064489] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.064612] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:08.034 [2024-05-16 20:04:55.064840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.064865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:55.064920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.064932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:55.064985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.064997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:55.065052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 
20:04:55.065064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.034 [2024-05-16 20:04:55.065116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.034 [2024-05-16 20:04:55.065128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.034 #60 NEW cov: 12152 ft: 15093 corp: 35/943b lim: 30 exec/s: 60 rss: 74Mb L: 30/30 MS: 1 CopyPart- 00:06:08.034 [2024-05-16 20:04:55.104239] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.104370] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (324856) > buf size (4096) 00:06:08.034 [2024-05-16 20:04:55.104499] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.104619] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.034 [2024-05-16 20:04:55.104737] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:08.034 [2024-05-16 20:04:55.104962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d8c813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.104987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.035 [2024-05-16 20:04:55.105041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.105056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.035 [2024-05-16 20:04:55.105109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.105122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.035 [2024-05-16 20:04:55.105171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.105183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.035 [2024-05-16 20:04:55.105236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.105247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.035 #61 NEW cov: 12152 ft: 15114 corp: 36/973b lim: 30 exec/s: 61 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:06:08.035 [2024-05-16 20:04:55.144337] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.035 [2024-05-16 20:04:55.144469] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.035 [2024-05-16 20:04:55.144587] ctrlr.c:2612:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x100003d3d 00:06:08.035 [2024-05-16 20:04:55.144706] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003f3d 00:06:08.035 [2024-05-16 20:04:55.144824] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:08.035 [2024-05-16 20:04:55.145056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b33d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.145079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.035 [2024-05-16 20:04:55.145133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.145146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.035 [2024-05-16 20:04:55.145199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.145211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.035 [2024-05-16 20:04:55.145264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.145275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.035 [2024-05-16 20:04:55.145330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:c3f581c2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.035 [2024-05-16 20:04:55.145341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.035 #62 NEW cov: 12152 ft: 15121 corp: 37/1003b lim: 30 exec/s: 62 rss: 74Mb L: 30/30 MS: 1 ChangeBit- 00:06:08.295 [2024-05-16 20:04:55.184481] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.184609] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (324856) > buf size (4096) 00:06:08.295 [2024-05-16 20:04:55.184730] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.184845] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d3d 00:06:08.295 [2024-05-16 20:04:55.184970] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:08.295 [2024-05-16 20:04:55.185199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d8c813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.185224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.295 [2024-05-16 20:04:55.185278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.185290] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.295 [2024-05-16 20:04:55.185345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.185357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.295 [2024-05-16 20:04:55.185408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.185420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.295 [2024-05-16 20:04:55.185468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.185480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.295 #63 NEW cov: 12152 ft: 15175 corp: 38/1033b lim: 30 exec/s: 63 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:06:08.295 [2024-05-16 20:04:55.234614] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.234741] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.234862] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.234980] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.235103] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a 00:06:08.295 [2024-05-16 20:04:55.235332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d8145 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.235357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.295 [2024-05-16 20:04:55.235410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.235423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.295 [2024-05-16 20:04:55.235467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.235479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.295 [2024-05-16 20:04:55.235530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.235542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.295 [2024-05-16 20:04:55.235595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.235610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.295 #64 NEW cov: 12152 ft: 15199 corp: 39/1063b lim: 30 exec/s: 64 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:08.295 [2024-05-16 20:04:55.274753] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.274881] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.275004] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.275123] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.295 [2024-05-16 20:04:55.275245] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:08.295 [2024-05-16 20:04:55.275484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d8c813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.295 [2024-05-16 20:04:55.275509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.275562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.275593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.275648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.275662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.275715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.275729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.275793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.275805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.296 #65 NEW cov: 12152 ft: 15208 corp: 40/1093b lim: 30 exec/s: 65 rss: 74Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:08.296 [2024-05-16 20:04:55.314749] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.296 [2024-05-16 20:04:55.314878] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002a3d 00:06:08.296 [2024-05-16 20:04:55.315012] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d0a 00:06:08.296 [2024-05-16 20:04:55.315249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:08.296 [2024-05-16 20:04:55.315274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.315326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.315340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.315392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3a7d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.315404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.296 #66 NEW cov: 12152 ft: 15219 corp: 41/1115b lim: 30 exec/s: 66 rss: 74Mb L: 22/30 MS: 1 ChangeBit- 00:06:08.296 [2024-05-16 20:04:55.364967] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.296 [2024-05-16 20:04:55.365095] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.296 [2024-05-16 20:04:55.365213] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.296 [2024-05-16 20:04:55.365335] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.296 [2024-05-16 20:04:55.365462] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a 00:06:08.296 [2024-05-16 20:04:55.365702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.365728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.365782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.365795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.365846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d810f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.365859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.365911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.365923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.365975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:0a3d833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.365988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.296 #67 NEW cov: 12152 ft: 15244 corp: 42/1145b 
lim: 30 exec/s: 67 rss: 74Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:08.296 [2024-05-16 20:04:55.405064] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.296 [2024-05-16 20:04:55.405192] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.296 [2024-05-16 20:04:55.405307] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.296 [2024-05-16 20:04:55.405422] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000453d 00:06:08.296 [2024-05-16 20:04:55.405547] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a 00:06:08.296 [2024-05-16 20:04:55.405779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.405804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.405859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.405873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.405925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d810f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.405937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.405989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.406004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.296 [2024-05-16 20:04:55.406057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:0a3d833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.296 [2024-05-16 20:04:55.406069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.296 #68 NEW cov: 12152 ft: 15271 corp: 43/1175b lim: 30 exec/s: 68 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:08.556 [2024-05-16 20:04:55.455220] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.556 [2024-05-16 20:04:55.455349] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.556 [2024-05-16 20:04:55.455472] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d43 00:06:08.556 [2024-05-16 20:04:55.455591] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003d3d 00:06:08.556 [2024-05-16 20:04:55.455710] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003d0a 00:06:08.556 [2024-05-16 20:04:55.455940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d3d8145 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.556 [2024-05-16 20:04:55.455965] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.556 [2024-05-16 20:04:55.456018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.556 [2024-05-16 20:04:55.456031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.556 [2024-05-16 20:04:55.456082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.556 [2024-05-16 20:04:55.456094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.556 [2024-05-16 20:04:55.456142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3d3d813d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.556 [2024-05-16 20:04:55.456155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.556 [2024-05-16 20:04:55.456207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3d0a833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.556 [2024-05-16 20:04:55.456219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.556 #69 NEW cov: 12152 ft: 15277 corp: 44/1205b lim: 30 exec/s: 34 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:06:08.556 #69 DONE cov: 12152 ft: 15277 corp: 44/1205b lim: 30 exec/s: 34 rss: 74Mb 00:06:08.556 ###### Recommended dictionary. ###### 00:06:08.556 "\001\000\000\000\000\000\000\000" # Uses: 2 00:06:08.556 ###### End of recommended dictionary. 
######
00:06:08.556 Done 69 runs in 2 second(s)
00:06:08.556 [2024-05-16 20:04:55.490809] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402'
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:08.556 20:04:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2
[2024-05-16 20:04:55.661200] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
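
The run.sh trace above launches fuzzer type 2, which exercises admin IDENTIFY commands against the TCP target on port 4402. For orientation, here is a minimal libFuzzer-style harness sketch of that pattern. It is NOT the SPDK source: the struct layout and stub_handle_identify helper are hypothetical stand-ins that only mirror what this run's traces show (IDENTIFY, opcode 06, with fuzz-controlled nsid/cdw10/cdw11, and the "Identify Namespace for invalid NSID 0" rejection from ctrlr.c). The real entry points appear further down in the NEW_FUNC lines as fuzz_admin_identify_command and TestOneInput in test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c.

/* Minimal libFuzzer harness sketch, assuming a simplified command layout.
 * fuzz_nvme_cmd and stub_handle_identify are hypothetical stand-ins, not
 * SPDK's spdk_nvme_cmd or its controller code. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct fuzz_nvme_cmd {
    uint8_t  opc;   /* opcode: 0x06 = IDENTIFY in the traces below */
    uint32_t nsid;  /* namespace ID */
    uint32_t cdw10; /* command dword 10 */
    uint32_t cdw11; /* command dword 11 */
};

/* Stand-in for the target's admin path: mirrors the check behind the
 * "Identify Namespace for invalid NSID 0" errors logged by ctrlr.c. */
static void stub_handle_identify(const struct fuzz_nvme_cmd *cmd)
{
    if (cmd->nsid == 0) {
        return; /* rejected: invalid NSID */
    }
    /* a real target would look up the namespace and fill the reply */
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    struct fuzz_nvme_cmd cmd = { 0 };

    if (size < sizeof(cmd)) {
        return 0; /* too few bytes to form a command */
    }
    memcpy(&cmd, data, sizeof(cmd));
    cmd.opc = 0x06; /* pin IDENTIFY so mutation explores its fields */
    stub_handle_identify(&cmd);
    return 0;
}

Built with clang -fsanitize=fuzzer, libFuzzer drives LLVMFuzzerTestOneInput with mutated inputs and prints the status lines seen throughout this log: INITED after startup, one "#N NEW cov:" record per coverage-increasing input (the MS: field names the mutations applied), and a "Done N runs" summary with a recommended dictionary at the end of the run (the -t 1 passed above presumably carries the wrapper's timen=1 budget).
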
00:06:08.556 [2024-05-16 20:04:55.661280] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663885 ] 00:06:08.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.815 [2024-05-16 20:04:55.838765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.815 [2024-05-16 20:04:55.903219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.815 [2024-05-16 20:04:55.961647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.074 [2024-05-16 20:04:55.977607] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:09.074 [2024-05-16 20:04:55.977956] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:09.074 INFO: Running with entropic power schedule (0xFF, 100). 00:06:09.074 INFO: Seed: 2332668755 00:06:09.074 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:09.074 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:09.074 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:09.074 INFO: A corpus is not provided, starting from an empty corpus 00:06:09.074 #2 INITED exec/s: 0 rss: 63Mb 00:06:09.074 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:09.074 This may also happen if the target rejected all inputs we tried so far 00:06:09.074 [2024-05-16 20:04:56.023067] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.023194] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.023316] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.023432] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.023666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.023697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.023754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.023768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.023823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.023838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.023889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.023902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.074 NEW_FUNC[1/685]: 0x4860d0 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:09.074 NEW_FUNC[2/685]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:09.074 #36 NEW cov: 11823 ft: 11821 corp: 2/34b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 4 CopyPart-CrossOver-CopyPart-InsertRepeatedBytes- 00:06:09.074 [2024-05-16 20:04:56.163387] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.163523] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.163636] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.163745] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.163962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.163996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.164050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.164066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.164119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.164132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.164184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.164198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.074 #42 NEW cov: 11953 ft: 12423 corp: 3/67b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:06:09.074 [2024-05-16 20:04:56.213483] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.213610] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.213724] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.213835] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.074 [2024-05-16 20:04:56.214055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 
[2024-05-16 20:04:56.214086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.214138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.214154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.214205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.214219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.074 [2024-05-16 20:04:56.214267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.074 [2024-05-16 20:04:56.214281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.333 #43 NEW cov: 11959 ft: 12662 corp: 4/95b lim: 35 exec/s: 0 rss: 72Mb L: 28/33 MS: 1 EraseBytes- 00:06:09.333 [2024-05-16 20:04:56.253587] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.253711] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.253822] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.253932] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.254143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.254170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.254224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.254239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.254290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.254305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.254353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.254367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.333 #44 NEW cov: 12044 ft: 12910 corp: 5/129b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 
InsertByte- 00:06:09.333 [2024-05-16 20:04:56.293662] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.293785] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.293897] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.294009] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.294226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.294253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.294307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.294320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.294371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.294385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.294433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.294446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.333 #45 NEW cov: 12044 ft: 12982 corp: 6/163b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeByte- 00:06:09.333 [2024-05-16 20:04:56.343814] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.343936] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.344048] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.344159] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.344369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.344395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.344446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.344466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.344516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 
nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.344529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.344577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.344591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.333 #46 NEW cov: 12044 ft: 13020 corp: 7/197b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CopyPart- 00:06:09.333 [2024-05-16 20:04:56.393936] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.394058] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.394169] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.394377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.394403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.394459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.394476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.394526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.394540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.333 #47 NEW cov: 12044 ft: 13643 corp: 8/218b lim: 35 exec/s: 0 rss: 72Mb L: 21/34 MS: 1 EraseBytes- 00:06:09.333 [2024-05-16 20:04:56.434105] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.434222] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.434336] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.434444] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.333 [2024-05-16 20:04:56.434666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.434692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.333 [2024-05-16 20:04:56.434746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.333 [2024-05-16 20:04:56.434760] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.334 [2024-05-16 20:04:56.434807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00400000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.334 [2024-05-16 20:04:56.434820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.334 [2024-05-16 20:04:56.434870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.334 [2024-05-16 20:04:56.434884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.334 #48 NEW cov: 12044 ft: 13736 corp: 9/251b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBit- 00:06:09.334 [2024-05-16 20:04:56.474184] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.334 [2024-05-16 20:04:56.474307] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.334 [2024-05-16 20:04:56.474437] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.334 [2024-05-16 20:04:56.474559] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.334 [2024-05-16 20:04:56.474775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.334 [2024-05-16 20:04:56.474801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.334 [2024-05-16 20:04:56.474852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.334 [2024-05-16 20:04:56.474867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.334 [2024-05-16 20:04:56.474919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.334 [2024-05-16 20:04:56.474933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.334 [2024-05-16 20:04:56.474987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.334 [2024-05-16 20:04:56.475000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.593 #49 NEW cov: 12044 ft: 13751 corp: 10/285b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:06:09.593 [2024-05-16 20:04:56.524358] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.524494] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.524609] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 
20:04:56.524733] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.524948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.524975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.525025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.525040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.525090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000fa00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.525102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.525154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.525168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.593 #50 NEW cov: 12044 ft: 13801 corp: 11/313b lim: 35 exec/s: 0 rss: 72Mb L: 28/34 MS: 1 ChangeBinInt- 00:06:09.593 [2024-05-16 20:04:56.574498] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.574618] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.574728] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.574837] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.575052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.575078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.575131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.575146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.575197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.575210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.575261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000021 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.575277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.593 #51 NEW cov: 12044 ft: 13841 corp: 12/346b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBinInt- 00:06:09.593 [2024-05-16 20:04:56.614610] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.614730] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.614845] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.614956] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.615174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.615200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.615251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.615266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.615318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.615332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.615381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.615394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.593 #52 NEW cov: 12044 ft: 13865 corp: 13/379b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 CopyPart- 00:06:09.593 [2024-05-16 20:04:56.664744] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.664864] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.664974] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.665083] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.593 [2024-05-16 20:04:56.665303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.665328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.665381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.665394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.665446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.665463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.593 [2024-05-16 20:04:56.665513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:32000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.593 [2024-05-16 20:04:56.665528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.593 #53 NEW cov: 12044 ft: 13875 corp: 14/412b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeByte- 00:06:09.594 [2024-05-16 20:04:56.714888] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.594 [2024-05-16 20:04:56.715007] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.594 [2024-05-16 20:04:56.715118] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.594 [2024-05-16 20:04:56.715227] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.594 [2024-05-16 20:04:56.715452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.594 [2024-05-16 20:04:56.715480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.594 [2024-05-16 20:04:56.715532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.594 [2024-05-16 20:04:56.715547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.594 [2024-05-16 20:04:56.715596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.594 [2024-05-16 20:04:56.715610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.594 [2024-05-16 20:04:56.715660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.594 [2024-05-16 20:04:56.715674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.594 #54 NEW cov: 12044 ft: 13885 corp: 15/446b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBit- 00:06:09.852 [2024-05-16 20:04:56.755022] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.852 [2024-05-16 20:04:56.755143] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.852 [2024-05-16 20:04:56.755255] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: 
Identify Namespace for invalid NSID 0 00:06:09.852 [2024-05-16 20:04:56.755366] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.852 [2024-05-16 20:04:56.755591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.755618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.755671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.755686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.755738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.755752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.755804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.755818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.853 #55 NEW cov: 12044 ft: 13910 corp: 16/480b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeByte- 00:06:09.853 [2024-05-16 20:04:56.805143] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.805263] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.805376] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.805491] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.805709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.805735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.805787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.805802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.805851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.805865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.805916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.805929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.853 #56 NEW cov: 12044 ft: 13925 corp: 17/514b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:06:09.853 [2024-05-16 20:04:56.845238] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.845359] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.845482] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.845592] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.845805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.845832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.845883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.845898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.845949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.845963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.846015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.846029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.853 #57 NEW cov: 12044 ft: 13934 corp: 18/548b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CopyPart- 00:06:09.853 [2024-05-16 20:04:56.885351] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.885486] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.885597] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.885807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.885833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.885886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:56000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.885902] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.885952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.885966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.853 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:09.853 #58 NEW cov: 12067 ft: 13976 corp: 19/572b lim: 35 exec/s: 0 rss: 72Mb L: 24/34 MS: 1 EraseBytes- 00:06:09.853 [2024-05-16 20:04:56.935492] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.935613] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.935726] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.935946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.935973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.936025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.936039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.936091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.936105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.853 #59 NEW cov: 12067 ft: 14006 corp: 20/599b lim: 35 exec/s: 0 rss: 73Mb L: 27/34 MS: 1 EraseBytes- 00:06:09.853 [2024-05-16 20:04:56.975624] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.975747] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.975859] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.975974] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:09.853 [2024-05-16 20:04:56.976192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.976219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.976271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.976290] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.976344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.976359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.853 [2024-05-16 20:04:56.976410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.853 [2024-05-16 20:04:56.976424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.853 #60 NEW cov: 12067 ft: 14044 corp: 21/633b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ShuffleBytes- 00:06:10.112 [2024-05-16 20:04:57.015685] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.015809] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.016146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.016172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.016227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00350000 cdw11:3a00bc71 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.016242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.016294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:060000a5 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.016306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.112 #61 NEW cov: 12077 ft: 14108 corp: 22/654b lim: 35 exec/s: 61 rss: 73Mb L: 21/34 MS: 1 CMP- DE: "5\274q:S\245\006\000"- 00:06:10.112 [2024-05-16 20:04:57.065892] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.066017] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.066131] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.066244] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.066462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.066504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.066559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.066573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.066626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.066641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.066691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.066707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.112 #62 NEW cov: 12077 ft: 14121 corp: 23/688b lim: 35 exec/s: 62 rss: 73Mb L: 34/34 MS: 1 ShuffleBytes- 00:06:10.112 [2024-05-16 20:04:57.105972] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.106195] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.106309] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.106535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:bc000035 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.106562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.106616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:53a5003a cdw11:00000600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.106629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.106681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.106696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.106749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.106763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.112 #63 NEW cov: 12077 ft: 14155 corp: 24/722b lim: 35 exec/s: 63 rss: 73Mb L: 34/34 MS: 1 PersAutoDict- DE: "5\274q:S\245\006\000"- 00:06:10.112 [2024-05-16 20:04:57.156137] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.156254] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.156368] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 
20:04:57.156488] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.112 [2024-05-16 20:04:57.156702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:40000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.156729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.112 [2024-05-16 20:04:57.156780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.112 [2024-05-16 20:04:57.156795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.113 [2024-05-16 20:04:57.156848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.156861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.113 [2024-05-16 20:04:57.156911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.156925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.113 #64 NEW cov: 12077 ft: 14164 corp: 25/756b lim: 35 exec/s: 64 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:06:10.113 [2024-05-16 20:04:57.196234] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.113 [2024-05-16 20:04:57.196360] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.113 [2024-05-16 20:04:57.196481] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.113 [2024-05-16 20:04:57.196690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.196716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.113 [2024-05-16 20:04:57.196769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:56000000 cdw11:00007500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.196783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.113 [2024-05-16 20:04:57.196835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.196850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.113 #65 NEW cov: 12077 ft: 14181 corp: 26/780b lim: 35 exec/s: 65 rss: 73Mb L: 24/34 MS: 1 ChangeByte- 00:06:10.113 [2024-05-16 20:04:57.246417] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.113 [2024-05-16 20:04:57.246544] 
ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.113 [2024-05-16 20:04:57.246659] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.113 [2024-05-16 20:04:57.246777] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.113 [2024-05-16 20:04:57.246999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.247025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.113 [2024-05-16 20:04:57.247083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.247096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.113 [2024-05-16 20:04:57.247149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.247163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.113 [2024-05-16 20:04:57.247214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00010000 cdw11:02000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.113 [2024-05-16 20:04:57.247228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.372 #66 NEW cov: 12077 ft: 14220 corp: 27/814b lim: 35 exec/s: 66 rss: 73Mb L: 34/34 MS: 1 ChangeBinInt- 00:06:10.372 [2024-05-16 20:04:57.296560] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.296685] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.296812] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.296923] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.297139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.297168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.297220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.297235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.297286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000fa00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.297299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.297351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.297365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.372 #67 NEW cov: 12077 ft: 14221 corp: 28/842b lim: 35 exec/s: 67 rss: 73Mb L: 28/34 MS: 1 CopyPart- 00:06:10.372 [2024-05-16 20:04:57.346681] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.346801] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.346914] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.347131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.347157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.347209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:56000000 cdw11:00007500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.347224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.347276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.347289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.372 #68 NEW cov: 12077 ft: 14227 corp: 29/866b lim: 35 exec/s: 68 rss: 74Mb L: 24/34 MS: 1 CopyPart- 00:06:10.372 [2024-05-16 20:04:57.396873] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.396995] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.397114] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.397225] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.397340] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.397564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.397592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.397648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.397661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.397715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.397729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.397783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:23000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.397797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.397849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.397863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:10.372 #69 NEW cov: 12077 ft: 14291 corp: 30/901b lim: 35 exec/s: 69 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:06:10.372 [2024-05-16 20:04:57.446964] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.447087] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.447207] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.447319] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.447543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.447569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.447621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.447636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.447686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.447701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.447752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:fa00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.447766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.372 #70 NEW cov: 12077 ft: 14301 corp: 31/935b lim: 35 exec/s: 70 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:10.372 [2024-05-16 20:04:57.497192] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify 
Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.497319] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.497438] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.497556] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.497666] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.372 [2024-05-16 20:04:57.497886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.497912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.497968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.497983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.498034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.498047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.498100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.372 [2024-05-16 20:04:57.498114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.372 [2024-05-16 20:04:57.498163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a002a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.373 [2024-05-16 20:04:57.498178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:10.632 #71 NEW cov: 12077 ft: 14302 corp: 32/970b lim: 35 exec/s: 71 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:06:10.632 [2024-05-16 20:04:57.547302] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.547423] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.547559] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.547675] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.547787] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.548009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.548035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.548089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.548103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.548157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.548171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.548225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.548240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.548294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.548308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:10.632 #72 NEW cov: 12077 ft: 14308 corp: 33/1005b lim: 35 exec/s: 72 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:10.632 [2024-05-16 20:04:57.587361] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.587497] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.587717] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.587940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.587966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.588021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.588035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.588088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000080 cdw11:0000fa00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.588100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.588149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0a00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.588162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:06:10.632 #73 NEW cov: 12077 ft: 14357 corp: 34/1033b lim: 35 exec/s: 73 rss: 74Mb L: 28/35 MS: 1 ChangeBit- 00:06:10.632 [2024-05-16 20:04:57.627499] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.627622] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.627738] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.627851] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.627965] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.628183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.628209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.628261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.628276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.628328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.628342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.628394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.628408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.632 [2024-05-16 20:04:57.628461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.632 [2024-05-16 20:04:57.628475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:10.632 #74 NEW cov: 12077 ft: 14366 corp: 35/1068b lim: 35 exec/s: 74 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:10.632 [2024-05-16 20:04:57.677676] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.677900] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.678015] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.632 [2024-05-16 20:04:57.678129] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.678344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01060000 cdw11:8d00a553 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.678370] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.678423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:a0000063 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.678436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.678493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.678508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.678559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.678573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.678626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.678641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:10.633 #75 NEW cov: 12077 ft: 14384 corp: 36/1103b lim: 35 exec/s: 75 rss: 74Mb L: 35/35 MS: 1 CMP- DE: "\001\006\245S\215Fc\240"- 00:06:10.633 [2024-05-16 20:04:57.727818] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.727937] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.728044] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.728151] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.728258] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.728464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.728490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.728545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.728560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.728614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.728628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:10.633 [2024-05-16 20:04:57.728682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.728696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.728747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0a002a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.728761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:10.633 #76 NEW cov: 12077 ft: 14387 corp: 37/1138b lim: 35 exec/s: 76 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:10.633 [2024-05-16 20:04:57.767874] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.767995] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.768103] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.768212] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.633 [2024-05-16 20:04:57.768421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.768448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.768514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00004000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.768529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.768581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000056 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.768594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.633 [2024-05-16 20:04:57.768648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.633 [2024-05-16 20:04:57.768662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.892 #77 NEW cov: 12077 ft: 14409 corp: 38/1172b lim: 35 exec/s: 77 rss: 74Mb L: 34/35 MS: 1 ChangeBit- 00:06:10.892 [2024-05-16 20:04:57.807955] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.808073] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.808182] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.808307] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 
20:04:57.808527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.808553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.808607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.808622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.808674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:56000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.808691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.808744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.808758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.893 #78 NEW cov: 12077 ft: 14459 corp: 39/1206b lim: 35 exec/s: 78 rss: 74Mb L: 34/35 MS: 1 ShuffleBytes- 00:06:10.893 [2024-05-16 20:04:57.848102] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.848223] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.848337] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.848446] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.848658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.848684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.848738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.848753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.848805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000fa00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.848818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.848868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:06000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.848883] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.893 #79 NEW cov: 12077 ft: 14464 corp: 40/1234b lim: 35 exec/s: 79 rss: 74Mb L: 28/35 MS: 1 ChangeBinInt- 00:06:10.893 [2024-05-16 20:04:57.898265] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.898479] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.898594] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.898822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.898848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.898903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000fb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.898915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.898971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000fa00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.898986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.899038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:06000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.899055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.893 #80 NEW cov: 12077 ft: 14467 corp: 41/1262b lim: 35 exec/s: 80 rss: 74Mb L: 28/35 MS: 1 ChangeBinInt- 00:06:10.893 [2024-05-16 20:04:57.948364] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.948591] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.948699] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.948912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:35bc0000 cdw11:5300713a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.948938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.949051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.949066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.949118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0a000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.949132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.893 NEW_FUNC[1/3]: 0x1168e30 in spdk_nvmf_ctrlr_identify_iocs_specific /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3029 00:06:10.893 NEW_FUNC[2/3]: 0x1169760 in nvmf_ctrlr_identify_iocs_nvm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2985 00:06:10.893 #81 NEW cov: 12116 ft: 14506 corp: 42/1291b lim: 35 exec/s: 81 rss: 74Mb L: 29/35 MS: 1 PersAutoDict- DE: "5\274q:S\245\006\000"- 00:06:10.893 [2024-05-16 20:04:57.988482] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.988594] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.988701] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.988806] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.893 [2024-05-16 20:04:57.989020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.989047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.989101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.989116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.989170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.989184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.893 [2024-05-16 20:04:57.989237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.893 [2024-05-16 20:04:57.989250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.893 #82 NEW cov: 12116 ft: 14512 corp: 43/1321b lim: 35 exec/s: 41 rss: 74Mb L: 30/35 MS: 1 EraseBytes- 00:06:10.893 #82 DONE cov: 12116 ft: 14512 corp: 43/1321b lim: 35 exec/s: 41 rss: 74Mb 00:06:10.893 ###### Recommended dictionary. ###### 00:06:10.893 "5\274q:S\245\006\000" # Uses: 2 00:06:10.893 "\001\006\245S\215Fc\240" # Uses: 0 00:06:10.893 ###### End of recommended dictionary. 
###### 00:06:10.893 Done 82 runs in 2 second(s) 00:06:10.893 [2024-05-16 20:04:58.023060] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:11.152 20:04:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:11.152 [2024-05-16 20:04:58.191047] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:11.152 [2024-05-16 20:04:58.191129] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664327 ] 00:06:11.152 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.411 [2024-05-16 20:04:58.382464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.411 [2024-05-16 20:04:58.446985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.411 [2024-05-16 20:04:58.505426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.411 [2024-05-16 20:04:58.521392] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:11.411 [2024-05-16 20:04:58.521740] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:11.411 INFO: Running with entropic power schedule (0xFF, 100). 00:06:11.411 INFO: Seed: 581700861 00:06:11.411 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:11.411 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:11.411 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:11.411 INFO: A corpus is not provided, starting from an empty corpus 00:06:11.411 #2 INITED exec/s: 0 rss: 64Mb 00:06:11.411 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:11.411 This may also happen if the target rejected all inputs we tried so far 00:06:11.670 [2024-05-16 20:04:58.566994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.670 [2024-05-16 20:04:58.567022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.670 NEW_FUNC[1/691]: 0x487da0 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:11.671 NEW_FUNC[2/691]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:11.671 #7 NEW cov: 11967 ft: 11968 corp: 2/7b lim: 20 exec/s: 0 rss: 71Mb L: 6/6 MS: 5 ChangeByte-CrossOver-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:06:11.671 #9 NEW cov: 12120 ft: 13010 corp: 3/25b lim: 20 exec/s: 0 rss: 71Mb L: 18/18 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:11.671 [2024-05-16 20:04:58.737814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.671 [2024-05-16 20:04:58.737847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.671 #10 NEW cov: 12126 ft: 13258 corp: 4/45b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:11.671 [2024-05-16 20:04:58.787558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.671 [2024-05-16 20:04:58.787583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.671 #11 NEW cov: 12211 ft: 13591 corp: 5/52b lim: 20 
exec/s: 0 rss: 71Mb L: 7/20 MS: 1 InsertByte- 00:06:11.930 #12 NEW cov: 12211 ft: 13682 corp: 6/71b lim: 20 exec/s: 0 rss: 71Mb L: 19/20 MS: 1 InsertByte- 00:06:11.930 #13 NEW cov: 12211 ft: 13782 corp: 7/89b lim: 20 exec/s: 0 rss: 72Mb L: 18/20 MS: 1 ChangeBit- 00:06:11.930 #14 NEW cov: 12211 ft: 13902 corp: 8/95b lim: 20 exec/s: 0 rss: 72Mb L: 6/20 MS: 1 ChangeByte- 00:06:11.930 #15 NEW cov: 12211 ft: 13929 corp: 9/114b lim: 20 exec/s: 0 rss: 72Mb L: 19/20 MS: 1 CrossOver- 00:06:11.930 #16 NEW cov: 12211 ft: 13981 corp: 10/134b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertByte- 00:06:11.930 [2024-05-16 20:04:59.058774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.930 [2024-05-16 20:04:59.058801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.189 #17 NEW cov: 12211 ft: 14013 corp: 11/154b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeByte- 00:06:12.189 [2024-05-16 20:04:59.108429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.189 [2024-05-16 20:04:59.108461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.189 #18 NEW cov: 12211 ft: 14031 corp: 12/160b lim: 20 exec/s: 0 rss: 72Mb L: 6/20 MS: 1 CopyPart- 00:06:12.189 #19 NEW cov: 12211 ft: 14071 corp: 13/178b lim: 20 exec/s: 0 rss: 72Mb L: 18/20 MS: 1 ShuffleBytes- 00:06:12.189 [2024-05-16 20:04:59.188788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.189 [2024-05-16 20:04:59.188815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.189 #20 NEW cov: 12216 ft: 14274 corp: 14/189b lim: 20 exec/s: 0 rss: 72Mb L: 11/20 MS: 1 EraseBytes- 00:06:12.189 #21 NEW cov: 12216 ft: 14352 corp: 15/209b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 CrossOver- 00:06:12.189 #24 NEW cov: 12216 ft: 14368 corp: 16/215b lim: 20 exec/s: 0 rss: 72Mb L: 6/20 MS: 3 CopyPart-InsertByte-CMP- DE: "\001\000\000\004"- 00:06:12.447 #25 NEW cov: 12216 ft: 14465 corp: 17/221b lim: 20 exec/s: 0 rss: 72Mb L: 6/20 MS: 1 CrossOver- 00:06:12.447 [2024-05-16 20:04:59.379641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.447 [2024-05-16 20:04:59.379667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.447 #26 NEW cov: 12216 ft: 14489 corp: 18/241b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeByte- 00:06:12.447 #27 NEW cov: 12216 ft: 14510 corp: 19/260b lim: 20 exec/s: 0 rss: 72Mb L: 19/20 MS: 1 PersAutoDict- DE: "\001\000\000\004"- 00:06:12.447 [2024-05-16 20:04:59.459865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.447 [2024-05-16 20:04:59.459890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.447 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:12.448 #28 NEW cov: 12239 ft: 14539 corp: 
20/280b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeBit- 00:06:12.448 #29 NEW cov: 12239 ft: 14554 corp: 21/300b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 CopyPart- 00:06:12.448 #30 NEW cov: 12239 ft: 14558 corp: 22/320b lim: 20 exec/s: 30 rss: 72Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:12.707 #31 NEW cov: 12239 ft: 14588 corp: 23/327b lim: 20 exec/s: 31 rss: 72Mb L: 7/20 MS: 1 ChangeBit- 00:06:12.707 #32 NEW cov: 12239 ft: 14597 corp: 24/345b lim: 20 exec/s: 32 rss: 72Mb L: 18/20 MS: 1 ChangeBinInt- 00:06:12.707 [2024-05-16 20:04:59.700675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.707 [2024-05-16 20:04:59.700702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.707 NEW_FUNC[1/3]: 0x13357c0 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:777 00:06:12.707 NEW_FUNC[2/3]: 0x1356b00 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3517 00:06:12.707 #33 NEW cov: 12318 ft: 14749 corp: 25/365b lim: 20 exec/s: 33 rss: 72Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:12.707 [2024-05-16 20:04:59.750794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.707 [2024-05-16 20:04:59.750820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.707 #34 NEW cov: 12318 ft: 14762 corp: 26/385b lim: 20 exec/s: 34 rss: 73Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:12.707 #35 NEW cov: 12318 ft: 14826 corp: 27/395b lim: 20 exec/s: 35 rss: 73Mb L: 10/20 MS: 1 InsertRepeatedBytes- 00:06:12.707 [2024-05-16 20:04:59.840985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.707 [2024-05-16 20:04:59.841011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.993 #36 NEW cov: 12321 ft: 14966 corp: 28/415b lim: 20 exec/s: 36 rss: 73Mb L: 20/20 MS: 1 CMP- DE: "\001\000\000\020"- 00:06:12.993 [2024-05-16 20:04:59.891103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.993 [2024-05-16 20:04:59.891129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.993 #37 NEW cov: 12321 ft: 14969 corp: 29/435b lim: 20 exec/s: 37 rss: 73Mb L: 20/20 MS: 1 CrossOver- 00:06:12.993 [2024-05-16 20:04:59.931292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.993 [2024-05-16 20:04:59.931318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.993 #38 NEW cov: 12321 ft: 15048 corp: 30/455b lim: 20 exec/s: 38 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:06:12.993 #39 NEW cov: 12321 ft: 15074 corp: 31/474b lim: 20 exec/s: 39 rss: 73Mb L: 19/20 MS: 1 ChangeByte- 00:06:12.993 #40 NEW cov: 12321 ft: 15079 corp: 32/480b lim: 20 exec/s: 40 rss: 73Mb L: 6/20 MS: 1 PersAutoDict- DE: "\001\000\000\004"- 00:06:12.993 #41 NEW cov: 12321 ft: 15104 corp: 33/488b lim: 20 
exec/s: 41 rss: 73Mb L: 8/20 MS: 1 InsertByte- 00:06:13.252 #42 NEW cov: 12321 ft: 15117 corp: 34/495b lim: 20 exec/s: 42 rss: 73Mb L: 7/20 MS: 1 InsertByte- 00:06:13.252 #43 NEW cov: 12321 ft: 15126 corp: 35/505b lim: 20 exec/s: 43 rss: 74Mb L: 10/20 MS: 1 ChangeASCIIInt- 00:06:13.252 #44 NEW cov: 12321 ft: 15130 corp: 36/525b lim: 20 exec/s: 44 rss: 74Mb L: 20/20 MS: 1 ChangeBit- 00:06:13.252 #45 NEW cov: 12321 ft: 15163 corp: 37/529b lim: 20 exec/s: 45 rss: 74Mb L: 4/20 MS: 1 EraseBytes- 00:06:13.252 #46 NEW cov: 12321 ft: 15168 corp: 38/548b lim: 20 exec/s: 46 rss: 74Mb L: 19/20 MS: 1 ShuffleBytes- 00:06:13.252 #47 NEW cov: 12321 ft: 15176 corp: 39/567b lim: 20 exec/s: 47 rss: 74Mb L: 19/20 MS: 1 CMP- DE: "\001\006\245X\340\245\206\010"- 00:06:13.252 [2024-05-16 20:05:00.392557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.252 [2024-05-16 20:05:00.392594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.510 #48 NEW cov: 12321 ft: 15181 corp: 40/587b lim: 20 exec/s: 48 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:13.510 #49 NEW cov: 12321 ft: 15219 corp: 41/607b lim: 20 exec/s: 49 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:13.511 #50 NEW cov: 12321 ft: 15244 corp: 42/625b lim: 20 exec/s: 50 rss: 74Mb L: 18/20 MS: 1 ChangeBinInt- 00:06:13.511 [2024-05-16 20:05:00.522897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.511 [2024-05-16 20:05:00.522925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.511 #51 NEW cov: 12321 ft: 15258 corp: 43/645b lim: 20 exec/s: 25 rss: 74Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:13.511 #51 DONE cov: 12321 ft: 15258 corp: 43/645b lim: 20 exec/s: 25 rss: 74Mb 00:06:13.511 ###### Recommended dictionary. ###### 00:06:13.511 "\001\000\000\004" # Uses: 2 00:06:13.511 "\001\000\000\020" # Uses: 0 00:06:13.511 "\001\006\245X\340\245\206\010" # Uses: 0 00:06:13.511 ###### End of recommended dictionary. 
###### 00:06:13.511 Done 51 runs in 2 second(s) 00:06:13.511 [2024-05-16 20:05:00.559725] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:13.769 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:13.769 20:05:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:13.769 20:05:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:13.769 20:05:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:13.770 20:05:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:13.770 [2024-05-16 20:05:00.730932] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:13.770 [2024-05-16 20:05:00.731012] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664761 ] 00:06:13.770 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.770 [2024-05-16 20:05:00.911278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.028 [2024-05-16 20:05:00.976704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.028 [2024-05-16 20:05:01.035244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.028 [2024-05-16 20:05:01.051212] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:14.028 [2024-05-16 20:05:01.051570] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:14.028 INFO: Running with entropic power schedule (0xFF, 100). 00:06:14.028 INFO: Seed: 3109700849 00:06:14.028 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:14.028 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:14.028 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:14.028 INFO: A corpus is not provided, starting from an empty corpus 00:06:14.028 #2 INITED exec/s: 0 rss: 64Mb 00:06:14.028 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:14.028 This may also happen if the target rejected all inputs we tried so far 00:06:14.028 [2024-05-16 20:05:01.097224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8aff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.028 [2024-05-16 20:05:01.097252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.028 [2024-05-16 20:05:01.097308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.029 [2024-05-16 20:05:01.097320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.029 [2024-05-16 20:05:01.097374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.029 [2024-05-16 20:05:01.097387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.286 NEW_FUNC[1/686]: 0x488e90 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:14.286 NEW_FUNC[2/686]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:14.286 #17 NEW cov: 11845 ft: 11836 corp: 2/24b lim: 35 exec/s: 0 rss: 71Mb L: 23/23 MS: 5 ChangeBit-CrossOver-CrossOver-CrossOver-InsertRepeatedBytes- 00:06:14.286 [2024-05-16 20:05:01.237327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:14.286 [2024-05-16 20:05:01.237362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.286 #20 NEW cov: 11975 ft: 13099 corp: 3/31b lim: 35 exec/s: 0 rss: 71Mb L: 7/23 MS: 3 ChangeByte-ChangeASCIIInt-InsertRepeatedBytes- 00:06:14.286 [2024-05-16 20:05:01.277299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.287 [2024-05-16 20:05:01.277323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.287 #22 NEW cov: 11981 ft: 13507 corp: 4/42b lim: 35 exec/s: 0 rss: 71Mb L: 11/23 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:14.287 [2024-05-16 20:05:01.317403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:008c0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.287 [2024-05-16 20:05:01.317427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.287 #23 NEW cov: 12066 ft: 13802 corp: 5/49b lim: 35 exec/s: 0 rss: 71Mb L: 7/23 MS: 1 ChangeByte- 00:06:14.287 [2024-05-16 20:05:01.367569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:8c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.287 [2024-05-16 20:05:01.367593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.287 #24 NEW cov: 12066 ft: 13897 corp: 6/57b lim: 35 exec/s: 0 rss: 71Mb L: 8/23 MS: 1 InsertByte- 00:06:14.287 [2024-05-16 20:05:01.417718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.287 [2024-05-16 20:05:01.417742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.545 #25 NEW cov: 12066 ft: 13959 corp: 7/64b lim: 35 exec/s: 0 rss: 71Mb L: 7/23 MS: 1 ShuffleBytes- 00:06:14.545 [2024-05-16 20:05:01.457806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.545 [2024-05-16 20:05:01.457831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.545 #26 NEW cov: 12066 ft: 14027 corp: 8/71b lim: 35 exec/s: 0 rss: 72Mb L: 7/23 MS: 1 CopyPart- 00:06:14.545 [2024-05-16 20:05:01.497890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:48000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.545 [2024-05-16 20:05:01.497914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.545 #27 NEW cov: 12066 ft: 14106 corp: 9/78b lim: 35 exec/s: 0 rss: 72Mb L: 7/23 MS: 1 ChangeByte- 00:06:14.545 [2024-05-16 20:05:01.548092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.545 [2024-05-16 20:05:01.548116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:14.545 #30 NEW cov: 12066 ft: 14143 corp: 10/90b lim: 35 exec/s: 0 rss: 72Mb L: 12/23 MS: 3 EraseBytes-ChangeByte-CrossOver- 00:06:14.545 [2024-05-16 20:05:01.598165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.545 [2024-05-16 20:05:01.598189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.545 #31 NEW cov: 12066 ft: 14261 corp: 11/103b lim: 35 exec/s: 0 rss: 72Mb L: 13/23 MS: 1 CopyPart- 00:06:14.545 [2024-05-16 20:05:01.638644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8aff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.545 [2024-05-16 20:05:01.638673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.545 [2024-05-16 20:05:01.638728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.545 [2024-05-16 20:05:01.638739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.545 [2024-05-16 20:05:01.638793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffff3a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.545 [2024-05-16 20:05:01.638805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.545 #32 NEW cov: 12066 ft: 14281 corp: 12/126b lim: 35 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ChangeByte- 00:06:14.545 [2024-05-16 20:05:01.688857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8c000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.545 [2024-05-16 20:05:01.688882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.546 [2024-05-16 20:05:01.688939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.546 [2024-05-16 20:05:01.688951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.546 [2024-05-16 20:05:01.689005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.546 [2024-05-16 20:05:01.689017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.804 #36 NEW cov: 12066 ft: 14302 corp: 13/147b lim: 35 exec/s: 0 rss: 72Mb L: 21/23 MS: 4 EraseBytes-ChangeByte-CopyPart-InsertRepeatedBytes- 00:06:14.804 [2024-05-16 20:05:01.728785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.728809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.804 [2024-05-16 20:05:01.728863] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.728874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.804 #37 NEW cov: 12066 ft: 14529 corp: 14/162b lim: 35 exec/s: 0 rss: 72Mb L: 15/23 MS: 1 CMP- DE: "\000\001"- 00:06:14.804 [2024-05-16 20:05:01.778756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000008c cdw11:49000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.778782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.804 #38 NEW cov: 12066 ft: 14684 corp: 15/170b lim: 35 exec/s: 0 rss: 72Mb L: 8/23 MS: 1 ShuffleBytes- 00:06:14.804 [2024-05-16 20:05:01.828875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.828902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.804 #39 NEW cov: 12066 ft: 14698 corp: 16/177b lim: 35 exec/s: 0 rss: 72Mb L: 7/23 MS: 1 ChangeByte- 00:06:14.804 [2024-05-16 20:05:01.869333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.869357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.804 [2024-05-16 20:05:01.869414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:d7460000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.869426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.804 [2024-05-16 20:05:01.869480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7f00fcb7 cdw11:00370000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.869493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.804 #40 NEW cov: 12066 ft: 14712 corp: 17/198b lim: 35 exec/s: 0 rss: 72Mb L: 21/23 MS: 1 CMP- DE: "\327F\020\374\267\177\000\000"- 00:06:14.804 [2024-05-16 20:05:01.909270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.909295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.804 [2024-05-16 20:05:01.909349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:54000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.804 [2024-05-16 20:05:01.909361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.804 #41 NEW cov: 12066 ft: 14760 corp: 18/213b lim: 35 exec/s: 0 rss: 72Mb L: 15/23 MS: 1 ChangeByte- 00:06:15.062 [2024-05-16 20:05:01.959586] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:01.959611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.062 [2024-05-16 20:05:01.959664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:01.959677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.062 [2024-05-16 20:05:01.959732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:01.959745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.062 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:15.062 #42 NEW cov: 12089 ft: 14798 corp: 19/234b lim: 35 exec/s: 0 rss: 72Mb L: 21/23 MS: 1 PersAutoDict- DE: "\000\001"- 00:06:15.062 [2024-05-16 20:05:02.009401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:003f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.009427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.062 #43 NEW cov: 12089 ft: 14844 corp: 20/242b lim: 35 exec/s: 0 rss: 72Mb L: 8/23 MS: 1 InsertByte- 00:06:15.062 [2024-05-16 20:05:02.049884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8c000000 cdw11:00250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.049911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.062 [2024-05-16 20:05:02.049967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.049980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.062 [2024-05-16 20:05:02.050033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.050049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.062 #44 NEW cov: 12089 ft: 14871 corp: 21/263b lim: 35 exec/s: 44 rss: 72Mb L: 21/23 MS: 1 ChangeByte- 00:06:15.062 [2024-05-16 20:05:02.100231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:10101010 cdw11:10100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.100255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.062 [2024-05-16 20:05:02.100307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:10101010 cdw11:10100000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.100319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.062 [2024-05-16 20:05:02.100372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:10101010 cdw11:10100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.100382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.062 [2024-05-16 20:05:02.100433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:10101010 cdw11:10100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.100444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.062 #47 NEW cov: 12089 ft: 15183 corp: 22/297b lim: 35 exec/s: 47 rss: 72Mb L: 34/34 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes- 00:06:15.062 [2024-05-16 20:05:02.139757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:008c0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.139782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.062 #48 NEW cov: 12089 ft: 15195 corp: 23/304b lim: 35 exec/s: 48 rss: 72Mb L: 7/34 MS: 1 ChangeByte- 00:06:15.062 [2024-05-16 20:05:02.179899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000008c cdw11:49000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.062 [2024-05-16 20:05:02.179923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.320 #49 NEW cov: 12089 ft: 15202 corp: 24/314b lim: 35 exec/s: 49 rss: 72Mb L: 10/34 MS: 1 PersAutoDict- DE: "\000\001"- 00:06:15.320 [2024-05-16 20:05:02.230429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.230453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.230529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.230541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.230595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.230606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.320 #50 NEW cov: 12089 ft: 15217 corp: 25/335b lim: 35 exec/s: 50 rss: 72Mb L: 21/34 MS: 1 ChangeByte- 00:06:15.320 [2024-05-16 20:05:02.270572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8aff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.270599] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.270656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.270668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.270721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:d0ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.270733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.320 #51 NEW cov: 12089 ft: 15233 corp: 26/359b lim: 35 exec/s: 51 rss: 72Mb L: 24/34 MS: 1 InsertByte- 00:06:15.320 [2024-05-16 20:05:02.310641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8aff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.310665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.310720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.310731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.310786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3dffff3a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.310798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.320 #52 NEW cov: 12089 ft: 15269 corp: 27/382b lim: 35 exec/s: 52 rss: 72Mb L: 23/34 MS: 1 ChangeByte- 00:06:15.320 [2024-05-16 20:05:02.360829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8c00007b cdw11:00250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.360853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.360910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.360922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.360976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.360987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.320 #53 NEW cov: 12089 ft: 15278 corp: 28/403b lim: 35 exec/s: 53 rss: 72Mb L: 21/34 MS: 1 ChangeByte- 00:06:15.320 [2024-05-16 20:05:02.410969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 
cdw10:8c000000 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.410996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.411049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0001fff6 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.411061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.320 [2024-05-16 20:05:02.411116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.411130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.320 #54 NEW cov: 12089 ft: 15285 corp: 29/428b lim: 35 exec/s: 54 rss: 72Mb L: 25/34 MS: 1 CMP- DE: "\377\377\377\366"- 00:06:15.320 [2024-05-16 20:05:02.450698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.320 [2024-05-16 20:05:02.450723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.578 #55 NEW cov: 12089 ft: 15300 corp: 30/435b lim: 35 exec/s: 55 rss: 72Mb L: 7/34 MS: 1 CrossOver- 00:06:15.578 [2024-05-16 20:05:02.490837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.490861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.578 #56 NEW cov: 12089 ft: 15314 corp: 31/442b lim: 35 exec/s: 56 rss: 73Mb L: 7/34 MS: 1 ChangeBinInt- 00:06:15.578 [2024-05-16 20:05:02.541369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8aff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.541392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.578 [2024-05-16 20:05:02.541448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.541466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.578 [2024-05-16 20:05:02.541520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:008cff00 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.541531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.578 #57 NEW cov: 12089 ft: 15327 corp: 32/465b lim: 35 exec/s: 57 rss: 73Mb L: 23/34 MS: 1 CrossOver- 00:06:15.578 [2024-05-16 20:05:02.581140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.581164] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.578 #58 NEW cov: 12089 ft: 15407 corp: 33/472b lim: 35 exec/s: 58 rss: 73Mb L: 7/34 MS: 1 ShuffleBytes- 00:06:15.578 [2024-05-16 20:05:02.631754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8c000000 cdw11:00250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.631777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.578 [2024-05-16 20:05:02.631832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:74747474 cdw11:74740002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.631844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.578 [2024-05-16 20:05:02.631899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:74747474 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.631910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.578 [2024-05-16 20:05:02.631963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:49007474 cdw11:8c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.631974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.578 #59 NEW cov: 12089 ft: 15435 corp: 34/501b lim: 35 exec/s: 59 rss: 73Mb L: 29/34 MS: 1 CrossOver- 00:06:15.578 [2024-05-16 20:05:02.671393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000008c cdw11:49000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.671419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.578 [2024-05-16 20:05:02.721574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000008c cdw11:49000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.578 [2024-05-16 20:05:02.721598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 #61 NEW cov: 12089 ft: 15442 corp: 35/511b lim: 35 exec/s: 61 rss: 73Mb L: 10/34 MS: 2 CrossOver-ShuffleBytes- 00:06:15.837 [2024-05-16 20:05:02.761742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-05-16 20:05:02.761766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 [2024-05-16 20:05:02.761822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:54000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-05-16 20:05:02.761834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.837 #62 NEW cov: 12089 ft: 15459 corp: 36/526b lim: 35 exec/s: 62 rss: 73Mb L: 15/34 MS: 1 ChangeBinInt- 00:06:15.837 [2024-05-16 
20:05:02.811713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000008c cdw11:31490000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-05-16 20:05:02.811737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 #63 NEW cov: 12089 ft: 15468 corp: 37/537b lim: 35 exec/s: 63 rss: 73Mb L: 11/34 MS: 1 InsertByte- 00:06:15.837 [2024-05-16 20:05:02.861847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:48000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-05-16 20:05:02.861871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 #64 NEW cov: 12089 ft: 15471 corp: 38/546b lim: 35 exec/s: 64 rss: 73Mb L: 9/34 MS: 1 PersAutoDict- DE: "\000\001"- 00:06:15.837 [2024-05-16 20:05:02.911991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:fe000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-05-16 20:05:02.912015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 #65 NEW cov: 12089 ft: 15481 corp: 39/559b lim: 35 exec/s: 65 rss: 73Mb L: 13/34 MS: 1 InsertByte- 00:06:15.837 [2024-05-16 20:05:02.962513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8aff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-05-16 20:05:02.962538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 [2024-05-16 20:05:02.962595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:faff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-05-16 20:05:02.962607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.837 [2024-05-16 20:05:02.962663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffff3a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-05-16 20:05:02.962691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.837 #66 NEW cov: 12089 ft: 15519 corp: 40/582b lim: 35 exec/s: 66 rss: 73Mb L: 23/34 MS: 1 ChangeBinInt- 00:06:16.097 [2024-05-16 20:05:03.002261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:04000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.097 [2024-05-16 20:05:03.002284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.097 #67 NEW cov: 12089 ft: 15537 corp: 41/593b lim: 35 exec/s: 67 rss: 74Mb L: 11/34 MS: 1 CMP- DE: "\377\377\377\004"- 00:06:16.097 [2024-05-16 20:05:03.052728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8aff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.097 [2024-05-16 20:05:03.052753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.097 
[2024-05-16 20:05:03.052810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.097 [2024-05-16 20:05:03.052821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.097 [2024-05-16 20:05:03.052876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff3a3d cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.097 [2024-05-16 20:05:03.052889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.097 #68 NEW cov: 12089 ft: 15542 corp: 42/615b lim: 35 exec/s: 34 rss: 74Mb L: 22/34 MS: 1 EraseBytes- 00:06:16.097 #68 DONE cov: 12089 ft: 15542 corp: 42/615b lim: 35 exec/s: 34 rss: 74Mb 00:06:16.097 ###### Recommended dictionary. ###### 00:06:16.097 "\000\001" # Uses: 3 00:06:16.097 "\327F\020\374\267\177\000\000" # Uses: 0 00:06:16.097 "\377\377\377\366" # Uses: 0 00:06:16.097 "\377\377\377\004" # Uses: 0 00:06:16.097 ###### End of recommended dictionary. ###### 00:06:16.097 Done 68 runs in 2 second(s) 00:06:16.097 [2024-05-16 20:05:03.087280] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:16.097 20:05:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:16.357 [2024-05-16 20:05:03.256498] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:16.357 [2024-05-16 20:05:03.256578] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665061 ] 00:06:16.357 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.357 [2024-05-16 20:05:03.438067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.357 [2024-05-16 20:05:03.503016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.630 [2024-05-16 20:05:03.561736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.630 [2024-05-16 20:05:03.577698] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:16.630 [2024-05-16 20:05:03.578052] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:16.630 INFO: Running with entropic power schedule (0xFF, 100). 00:06:16.630 INFO: Seed: 1342753826 00:06:16.630 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:16.630 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:16.630 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:16.630 INFO: A corpus is not provided, starting from an empty corpus 00:06:16.630 #2 INITED exec/s: 0 rss: 64Mb 00:06:16.630 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:16.630 This may also happen if the target rejected all inputs we tried so far 00:06:16.630 [2024-05-16 20:05:03.623328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.630 [2024-05-16 20:05:03.623355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.630 NEW_FUNC[1/685]: 0x48b020 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:16.630 NEW_FUNC[2/685]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:16.630 #4 NEW cov: 11854 ft: 11855 corp: 2/17b lim: 45 exec/s: 0 rss: 71Mb L: 16/16 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:16.630 [2024-05-16 20:05:03.763801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.630 [2024-05-16 20:05:03.763834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.630 [2024-05-16 20:05:03.763892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.630 [2024-05-16 20:05:03.763904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.889 NEW_FUNC[1/1]: 0x15f4460 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1494 00:06:16.889 #10 NEW cov: 11986 ft: 13122 corp: 3/39b lim: 45 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 CrossOver- 00:06:16.889 [2024-05-16 20:05:03.823877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.889 [2024-05-16 20:05:03.823902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.889 [2024-05-16 20:05:03.823954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.889 [2024-05-16 20:05:03.823965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.889 #13 NEW cov: 11992 ft: 13218 corp: 4/57b lim: 45 exec/s: 0 rss: 71Mb L: 18/22 MS: 3 ChangeBit-InsertByte-CrossOver- 00:06:16.889 [2024-05-16 20:05:03.863950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.889 [2024-05-16 20:05:03.863974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.889 [2024-05-16 20:05:03.864024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:a1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.889 [2024-05-16 20:05:03.864035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.889 #14 NEW 
cov: 12077 ft: 13538 corp: 5/76b lim: 45 exec/s: 0 rss: 71Mb L: 19/22 MS: 1 InsertByte- 00:06:16.889 [2024-05-16 20:05:03.913963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:13130a0a cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.889 [2024-05-16 20:05:03.913987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.889 #17 NEW cov: 12077 ft: 13737 corp: 6/93b lim: 45 exec/s: 0 rss: 72Mb L: 17/22 MS: 3 CrossOver-CrossOver-InsertRepeatedBytes- 00:06:16.889 [2024-05-16 20:05:03.954068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.889 [2024-05-16 20:05:03.954091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.889 #18 NEW cov: 12077 ft: 13857 corp: 7/109b lim: 45 exec/s: 0 rss: 72Mb L: 16/22 MS: 1 CopyPart- 00:06:16.889 [2024-05-16 20:05:03.994386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:13130a0a cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.889 [2024-05-16 20:05:03.994410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.889 [2024-05-16 20:05:03.994465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff131313 cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.889 [2024-05-16 20:05:03.994476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.889 #19 NEW cov: 12077 ft: 13901 corp: 8/127b lim: 45 exec/s: 0 rss: 72Mb L: 18/22 MS: 1 InsertByte- 00:06:17.148 [2024-05-16 20:05:04.044487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.148 [2024-05-16 20:05:04.044512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.148 [2024-05-16 20:05:04.044561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.148 [2024-05-16 20:05:04.044572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.148 #20 NEW cov: 12077 ft: 13925 corp: 9/145b lim: 45 exec/s: 0 rss: 72Mb L: 18/22 MS: 1 ShuffleBytes- 00:06:17.148 [2024-05-16 20:05:04.084588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.148 [2024-05-16 20:05:04.084612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.148 [2024-05-16 20:05:04.084663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.148 [2024-05-16 20:05:04.084675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.148 #21 NEW cov: 12077 ft: 
13943 corp: 10/168b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 CrossOver- 00:06:17.148 [2024-05-16 20:05:04.124512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.148 [2024-05-16 20:05:04.124535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.148 #22 NEW cov: 12077 ft: 13978 corp: 11/184b lim: 45 exec/s: 0 rss: 72Mb L: 16/23 MS: 1 EraseBytes- 00:06:17.148 [2024-05-16 20:05:04.164651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.148 [2024-05-16 20:05:04.164675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.148 #23 NEW cov: 12077 ft: 14001 corp: 12/200b lim: 45 exec/s: 0 rss: 72Mb L: 16/23 MS: 1 ChangeByte- 00:06:17.148 [2024-05-16 20:05:04.214951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:13130a0a cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.149 [2024-05-16 20:05:04.214976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.149 [2024-05-16 20:05:04.215026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff131313 cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.149 [2024-05-16 20:05:04.215037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.149 #24 NEW cov: 12077 ft: 14023 corp: 13/218b lim: 45 exec/s: 0 rss: 72Mb L: 18/23 MS: 1 ChangeBinInt- 00:06:17.149 [2024-05-16 20:05:04.265097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00002d1a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.149 [2024-05-16 20:05:04.265121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.149 [2024-05-16 20:05:04.265170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.149 [2024-05-16 20:05:04.265181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.408 #25 NEW cov: 12077 ft: 14040 corp: 14/237b lim: 45 exec/s: 0 rss: 72Mb L: 19/23 MS: 1 InsertByte- 00:06:17.408 [2024-05-16 20:05:04.315257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.315281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.408 [2024-05-16 20:05:04.315332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.315343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.408 #26 NEW cov: 12077 ft: 14046 corp: 15/260b lim: 45 exec/s: 0 
rss: 72Mb L: 23/23 MS: 1 InsertByte- 00:06:17.408 [2024-05-16 20:05:04.365378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.365401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.408 [2024-05-16 20:05:04.365451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.365470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.408 #27 NEW cov: 12077 ft: 14145 corp: 16/283b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ChangeBinInt- 00:06:17.408 [2024-05-16 20:05:04.415529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.415552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.408 [2024-05-16 20:05:04.415601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.415612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.408 #28 NEW cov: 12077 ft: 14186 corp: 17/306b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ShuffleBytes- 00:06:17.408 [2024-05-16 20:05:04.465815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.465839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.408 [2024-05-16 20:05:04.465890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85850004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.465901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.408 [2024-05-16 20:05:04.465949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00008500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.465961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.408 #29 NEW cov: 12077 ft: 14494 corp: 18/334b lim: 45 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:17.408 [2024-05-16 20:05:04.505755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.408 [2024-05-16 20:05:04.505778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.408 [2024-05-16 20:05:04.505829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:17.408 [2024-05-16 20:05:04.505841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.408 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:17.408 #30 NEW cov: 12100 ft: 14532 corp: 19/358b lim: 45 exec/s: 0 rss: 72Mb L: 24/28 MS: 1 InsertByte- 00:06:17.667 [2024-05-16 20:05:04.555769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.667 [2024-05-16 20:05:04.555795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.667 #31 NEW cov: 12100 ft: 14568 corp: 20/374b lim: 45 exec/s: 0 rss: 73Mb L: 16/28 MS: 1 CopyPart- 00:06:17.667 [2024-05-16 20:05:04.606058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.667 [2024-05-16 20:05:04.606084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.667 [2024-05-16 20:05:04.606135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.667 [2024-05-16 20:05:04.606149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.667 #32 NEW cov: 12100 ft: 14579 corp: 21/397b lim: 45 exec/s: 32 rss: 73Mb L: 23/28 MS: 1 ChangeBit- 00:06:17.667 [2024-05-16 20:05:04.656050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.667 [2024-05-16 20:05:04.656074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.667 #33 NEW cov: 12100 ft: 14611 corp: 22/413b lim: 45 exec/s: 33 rss: 73Mb L: 16/28 MS: 1 CrossOver- 00:06:17.667 [2024-05-16 20:05:04.696322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.667 [2024-05-16 20:05:04.696348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.667 [2024-05-16 20:05:04.696398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:a1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.667 [2024-05-16 20:05:04.696409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.667 #34 NEW cov: 12100 ft: 14615 corp: 23/432b lim: 45 exec/s: 34 rss: 73Mb L: 19/28 MS: 1 ChangeBit- 00:06:17.667 [2024-05-16 20:05:04.746325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000000df cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.667 [2024-05-16 20:05:04.746353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.667 #35 NEW cov: 12100 ft: 14632 corp: 24/449b lim: 45 exec/s: 35 rss: 73Mb L: 17/28 MS: 1 InsertByte- 
00:06:17.667 [2024-05-16 20:05:04.786403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:13130a0a cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.667 [2024-05-16 20:05:04.786427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.667 #36 NEW cov: 12100 ft: 14682 corp: 25/466b lim: 45 exec/s: 36 rss: 73Mb L: 17/28 MS: 1 ChangeBinInt- 00:06:17.927 [2024-05-16 20:05:04.826523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:04.826546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.927 #37 NEW cov: 12100 ft: 14724 corp: 26/482b lim: 45 exec/s: 37 rss: 73Mb L: 16/28 MS: 1 ChangeBinInt- 00:06:17.927 [2024-05-16 20:05:04.876807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:04.876830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.927 [2024-05-16 20:05:04.876881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:a12e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:04.876892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.927 #38 NEW cov: 12100 ft: 14736 corp: 27/502b lim: 45 exec/s: 38 rss: 73Mb L: 20/28 MS: 1 InsertByte- 00:06:17.927 [2024-05-16 20:05:04.927143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000000df cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:04.927166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.927 [2024-05-16 20:05:04.927217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:28282828 cdw11:28280001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:04.927233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.927 [2024-05-16 20:05:04.927280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:28282828 cdw11:28000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:04.927291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.927 #39 NEW cov: 12100 ft: 14742 corp: 28/533b lim: 45 exec/s: 39 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:17.927 [2024-05-16 20:05:04.976969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:04.976992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.927 #40 NEW cov: 12100 ft: 14755 corp: 29/549b lim: 45 exec/s: 40 rss: 73Mb L: 16/31 MS: 1 ChangeBinInt- 00:06:17.927 
[2024-05-16 20:05:05.017036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:13130a0a cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:05.017058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.927 #41 NEW cov: 12100 ft: 14823 corp: 30/566b lim: 45 exec/s: 41 rss: 74Mb L: 17/31 MS: 1 ChangeBit- 00:06:17.927 [2024-05-16 20:05:05.067378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:13130a0a cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:05.067400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.927 [2024-05-16 20:05:05.067450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff131313 cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.927 [2024-05-16 20:05:05.067466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.186 #42 NEW cov: 12100 ft: 14847 corp: 31/584b lim: 45 exec/s: 42 rss: 74Mb L: 18/31 MS: 1 ChangeByte- 00:06:18.186 [2024-05-16 20:05:05.107304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.107326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.186 #43 NEW cov: 12100 ft: 14881 corp: 32/600b lim: 45 exec/s: 43 rss: 74Mb L: 16/31 MS: 1 ShuffleBytes- 00:06:18.186 [2024-05-16 20:05:05.147580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000000df cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.147602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.186 [2024-05-16 20:05:05.147651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000043 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.147663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.186 #44 NEW cov: 12100 ft: 14897 corp: 33/618b lim: 45 exec/s: 44 rss: 74Mb L: 18/31 MS: 1 InsertByte- 00:06:18.186 [2024-05-16 20:05:05.187494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.187516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.186 #45 NEW cov: 12100 ft: 14911 corp: 34/635b lim: 45 exec/s: 45 rss: 74Mb L: 17/31 MS: 1 InsertByte- 00:06:18.186 [2024-05-16 20:05:05.227626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.227649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.186 #46 NEW cov: 12100 ft: 14919 
corp: 35/651b lim: 45 exec/s: 46 rss: 74Mb L: 16/31 MS: 1 ChangeBit- 00:06:18.186 [2024-05-16 20:05:05.267729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.267752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.186 #47 NEW cov: 12100 ft: 14936 corp: 36/663b lim: 45 exec/s: 47 rss: 74Mb L: 12/31 MS: 1 CrossOver- 00:06:18.186 [2024-05-16 20:05:05.318189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.318212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.186 [2024-05-16 20:05:05.318262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:a1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.318274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.186 [2024-05-16 20:05:05.318323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.186 [2024-05-16 20:05:05.318334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.446 #48 NEW cov: 12100 ft: 14972 corp: 37/697b lim: 45 exec/s: 48 rss: 74Mb L: 34/34 MS: 1 CopyPart- 00:06:18.446 [2024-05-16 20:05:05.358316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.358338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.446 [2024-05-16 20:05:05.358391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00003d00 cdw11:a1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.358402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.446 [2024-05-16 20:05:05.358451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.358468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.446 [2024-05-16 20:05:05.408620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.408642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.446 [2024-05-16 20:05:05.408690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00003d00 cdw11:a1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.408702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.446 [2024-05-16 20:05:05.408751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:a1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.408761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.446 [2024-05-16 20:05:05.408814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.408825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.446 #50 NEW cov: 12100 ft: 15287 corp: 38/738b lim: 45 exec/s: 50 rss: 74Mb L: 41/41 MS: 2 ChangeByte-CopyPart- 00:06:18.446 [2024-05-16 20:05:05.448258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.448280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.446 #51 NEW cov: 12100 ft: 15322 corp: 39/754b lim: 45 exec/s: 51 rss: 74Mb L: 16/41 MS: 1 ShuffleBytes- 00:06:18.446 [2024-05-16 20:05:05.498709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000000df cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.498731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.446 [2024-05-16 20:05:05.498781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:28282828 cdw11:28280001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.498792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.446 [2024-05-16 20:05:05.498840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:28282820 cdw11:28000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.498851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.446 #52 NEW cov: 12100 ft: 15332 corp: 40/785b lim: 45 exec/s: 52 rss: 74Mb L: 31/41 MS: 1 ChangeBinInt- 00:06:18.446 [2024-05-16 20:05:05.548539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.446 [2024-05-16 20:05:05.548561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.446 #53 NEW cov: 12100 ft: 15357 corp: 41/801b lim: 45 exec/s: 53 rss: 74Mb L: 16/41 MS: 1 ChangeByte- 00:06:18.705 [2024-05-16 20:05:05.598664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00ab0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.705 [2024-05-16 20:05:05.598687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.705 #54 NEW cov: 12100 ft: 15370 corp: 42/817b lim: 45 
exec/s: 27 rss: 74Mb L: 16/41 MS: 1 ChangeByte- 00:06:18.705 #54 DONE cov: 12100 ft: 15370 corp: 42/817b lim: 45 exec/s: 27 rss: 74Mb 00:06:18.705 Done 54 runs in 2 second(s) 00:06:18.705 [2024-05-16 20:05:05.617845] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:18.706 20:05:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:18.706 [2024-05-16 20:05:05.785368] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:18.706 [2024-05-16 20:05:05.785428] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665466 ] 00:06:18.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.965 [2024-05-16 20:05:05.968994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.965 [2024-05-16 20:05:06.034054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.965 [2024-05-16 20:05:06.092790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.965 [2024-05-16 20:05:06.108750] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:18.965 [2024-05-16 20:05:06.109100] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:19.225 INFO: Running with entropic power schedule (0xFF, 100). 00:06:19.225 INFO: Seed: 3873741713 00:06:19.225 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:19.225 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:19.225 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:19.225 INFO: A corpus is not provided, starting from an empty corpus 00:06:19.225 #2 INITED exec/s: 0 rss: 64Mb 00:06:19.225 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:19.225 This may also happen if the target rejected all inputs we tried so far 00:06:19.225 [2024-05-16 20:05:06.154277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004d4d cdw11:00000000 00:06:19.225 [2024-05-16 20:05:06.154304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.225 NEW_FUNC[1/678]: 0x48d830 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:19.225 NEW_FUNC[2/678]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:19.225 #6 NEW cov: 11722 ft: 11730 corp: 2/3b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 4 ShuffleBytes-ChangeByte-ChangeByte-CopyPart- 00:06:19.225 [2024-05-16 20:05:06.304795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000eaea cdw11:00000000 00:06:19.225 [2024-05-16 20:05:06.304825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.225 [2024-05-16 20:05:06.304876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000eaea cdw11:00000000 00:06:19.225 [2024-05-16 20:05:06.304887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.225 NEW_FUNC[1/6]: 0x4ca810 in malloc_completion_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/bdev/malloc/bdev_malloc.c:870 00:06:19.225 NEW_FUNC[2/6]: 0x17979f0 in spdk_nvme_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:757 00:06:19.225 #7 NEW cov: 11903 ft: 
12573 corp: 3/8b lim: 10 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:19.225 [2024-05-16 20:05:06.345043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:19.225 [2024-05-16 20:05:06.345065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.225 [2024-05-16 20:05:06.345113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.225 [2024-05-16 20:05:06.345124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.225 [2024-05-16 20:05:06.345185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.225 [2024-05-16 20:05:06.345196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.225 [2024-05-16 20:05:06.345243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.225 [2024-05-16 20:05:06.345253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.484 #12 NEW cov: 11909 ft: 13112 corp: 4/17b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 5 EraseBytes-ShuffleBytes-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:06:19.484 [2024-05-16 20:05:06.394846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004c4d cdw11:00000000 00:06:19.484 [2024-05-16 20:05:06.394868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.484 #13 NEW cov: 11994 ft: 13359 corp: 5/19b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ChangeBit- 00:06:19.484 [2024-05-16 20:05:06.434992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:19.484 [2024-05-16 20:05:06.435014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.484 #14 NEW cov: 11994 ft: 13457 corp: 6/22b lim: 10 exec/s: 0 rss: 71Mb L: 3/9 MS: 1 CMP- DE: "\000\014"- 00:06:19.485 [2024-05-16 20:05:06.475448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000039ff cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.475472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.475537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffa cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.475549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.475593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.475603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.475650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.475660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.485 #15 NEW cov: 11994 ft: 13574 corp: 7/31b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:06:19.485 [2024-05-16 20:05:06.525593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.525614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.525663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffa cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.525674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.525719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.525729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.525775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.525785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.485 #16 NEW cov: 11994 ft: 13653 corp: 8/40b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 PersAutoDict- DE: "\000\014"- 00:06:19.485 [2024-05-16 20:05:06.575873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.575895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.575945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffa cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.575956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.576003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.576013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.576058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.576069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.576116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.576126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:19.485 #17 NEW cov: 11994 ft: 13713 corp: 9/50b lim: 10 exec/s: 0 rss: 72Mb 
L: 10/10 MS: 1 CrossOver- 00:06:19.485 [2024-05-16 20:05:06.626006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.626027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.626077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000cff cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.626088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.626135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fa00 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.626146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.626194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.626207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.485 [2024-05-16 20:05:06.626254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.485 [2024-05-16 20:05:06.626264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:19.745 #18 NEW cov: 11994 ft: 13744 corp: 10/60b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:06:19.745 [2024-05-16 20:05:06.665617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004c4d cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.665639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.745 #19 NEW cov: 11994 ft: 13795 corp: 11/62b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 CopyPart- 00:06:19.745 [2024-05-16 20:05:06.716099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.716120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.716185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.716196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.716244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.716255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.716302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.716312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:19.745 #20 NEW cov: 11994 ft: 13858 corp: 12/71b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 ChangeBit- 00:06:19.745 [2024-05-16 20:05:06.756366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.756387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.756451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffa cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.756466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.756513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.756523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.756569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.756579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.756624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.756634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:19.745 #21 NEW cov: 11994 ft: 13882 corp: 13/81b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeBit- 00:06:19.745 [2024-05-16 20:05:06.806399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a08 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.806423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.806489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.806500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.806550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.806560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.745 [2024-05-16 20:05:06.806606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.745 [2024-05-16 20:05:06.806616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.745 #22 NEW cov: 11994 ft: 13907 corp: 14/90b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:19.745 [2024-05-16 20:05:06.856187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000574d cdw11:00000000 00:06:19.745 [2024-05-16 
20:05:06.856209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.745 #23 NEW cov: 11994 ft: 13938 corp: 15/92b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ChangeByte- 00:06:20.004 [2024-05-16 20:05:06.896297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.004 [2024-05-16 20:05:06.896319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.004 #24 NEW cov: 11994 ft: 14057 corp: 16/95b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 1 CrossOver- 00:06:20.004 [2024-05-16 20:05:06.946414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.004 [2024-05-16 20:05:06.946435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.004 #25 NEW cov: 11994 ft: 14092 corp: 17/97b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 PersAutoDict- DE: "\000\014"- 00:06:20.004 [2024-05-16 20:05:06.996548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000484d cdw11:00000000 00:06:20.004 [2024-05-16 20:05:06.996569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.004 #26 NEW cov: 11994 ft: 14113 corp: 18/99b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:06:20.004 [2024-05-16 20:05:07.037034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:20.004 [2024-05-16 20:05:07.037055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.004 [2024-05-16 20:05:07.037119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.005 [2024-05-16 20:05:07.037130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.005 [2024-05-16 20:05:07.037177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.005 [2024-05-16 20:05:07.037188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.005 [2024-05-16 20:05:07.037234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.005 [2024-05-16 20:05:07.037245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.005 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:20.005 #27 NEW cov: 12017 ft: 14148 corp: 19/108b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 PersAutoDict- DE: "\000\014"- 00:06:20.005 [2024-05-16 20:05:07.077164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:20.005 [2024-05-16 20:05:07.077186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.005 [2024-05-16 20:05:07.077235] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.005 [2024-05-16 20:05:07.077246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.005 [2024-05-16 20:05:07.077292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.005 [2024-05-16 20:05:07.077302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.005 [2024-05-16 20:05:07.077351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.005 [2024-05-16 20:05:07.077362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.005 #28 NEW cov: 12017 ft: 14185 corp: 20/117b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 CopyPart- 00:06:20.005 [2024-05-16 20:05:07.126926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.005 [2024-05-16 20:05:07.126947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.264 #29 NEW cov: 12017 ft: 14200 corp: 21/119b lim: 10 exec/s: 29 rss: 72Mb L: 2/10 MS: 1 CopyPart- 00:06:20.264 [2024-05-16 20:05:07.177093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.177115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.264 #30 NEW cov: 12017 ft: 14223 corp: 22/121b lim: 10 exec/s: 30 rss: 72Mb L: 2/10 MS: 1 PersAutoDict- DE: "\000\014"- 00:06:20.264 [2024-05-16 20:05:07.217243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005745 cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.217264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.264 #31 NEW cov: 12017 ft: 14231 corp: 23/123b lim: 10 exec/s: 31 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:06:20.264 [2024-05-16 20:05:07.267563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.267584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.264 [2024-05-16 20:05:07.267649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.267661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.264 [2024-05-16 20:05:07.267710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004c4c cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.267720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.264 #32 NEW cov: 12017 ft: 14392 corp: 24/129b lim: 10 exec/s: 32 rss: 72Mb L: 6/10 MS: 1 CopyPart- 00:06:20.264 [2024-05-16 
20:05:07.317873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.317897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.264 [2024-05-16 20:05:07.317948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffa cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.317960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.264 [2024-05-16 20:05:07.318007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.318017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.264 [2024-05-16 20:05:07.318064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000025 cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.318075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.264 #33 NEW cov: 12017 ft: 14420 corp: 25/138b lim: 10 exec/s: 33 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:06:20.264 [2024-05-16 20:05:07.357597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.357620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.264 #34 NEW cov: 12017 ft: 14449 corp: 26/141b lim: 10 exec/s: 34 rss: 73Mb L: 3/10 MS: 1 ChangeBit- 00:06:20.264 [2024-05-16 20:05:07.397835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009f00 cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.397857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.264 [2024-05-16 20:05:07.397904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000c4c cdw11:00000000 00:06:20.264 [2024-05-16 20:05:07.397914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.524 #35 NEW cov: 12017 ft: 14453 corp: 27/145b lim: 10 exec/s: 35 rss: 73Mb L: 4/10 MS: 1 InsertByte- 00:06:20.524 [2024-05-16 20:05:07.438329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.438352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.524 [2024-05-16 20:05:07.438417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.438428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.524 [2024-05-16 20:05:07.438479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.438491] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.524 [2024-05-16 20:05:07.438537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.438547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.524 [2024-05-16 20:05:07.438594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.438604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:20.524 #36 NEW cov: 12017 ft: 14524 corp: 28/155b lim: 10 exec/s: 36 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:20.524 [2024-05-16 20:05:07.498428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.498452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.524 [2024-05-16 20:05:07.498511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.498522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.524 [2024-05-16 20:05:07.498569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.498580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.524 [2024-05-16 20:05:07.498626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.524 [2024-05-16 20:05:07.498637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.524 #37 NEW cov: 12017 ft: 14550 corp: 29/164b lim: 10 exec/s: 37 rss: 73Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:20.525 [2024-05-16 20:05:07.538626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.538649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.538714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff4d cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.538725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.538772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.538782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.538828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.538838] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.538882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.538893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:20.525 #38 NEW cov: 12017 ft: 14562 corp: 30/174b lim: 10 exec/s: 38 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:06:20.525 [2024-05-16 20:05:07.578294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000070c cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.578317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.525 #39 NEW cov: 12017 ft: 14586 corp: 31/176b lim: 10 exec/s: 39 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:06:20.525 [2024-05-16 20:05:07.628813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.628836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.628901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000c00 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.628912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.628958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.628969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.629019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.629030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.525 #40 NEW cov: 12017 ft: 14593 corp: 32/185b lim: 10 exec/s: 40 rss: 73Mb L: 9/10 MS: 1 PersAutoDict- DE: "\000\014"- 00:06:20.525 [2024-05-16 20:05:07.668923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.668946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.668993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.669004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.669050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.669060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.525 [2024-05-16 20:05:07.669105] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.525 [2024-05-16 20:05:07.669116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.784 #41 NEW cov: 12017 ft: 14610 corp: 33/194b lim: 10 exec/s: 41 rss: 73Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:20.784 [2024-05-16 20:05:07.718829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000004d cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.718851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.784 [2024-05-16 20:05:07.718915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000c00 cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.718926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.784 #42 NEW cov: 12017 ft: 14633 corp: 34/198b lim: 10 exec/s: 42 rss: 73Mb L: 4/10 MS: 1 CrossOver- 00:06:20.784 [2024-05-16 20:05:07.769290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.769311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.784 [2024-05-16 20:05:07.769358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.769368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.784 [2024-05-16 20:05:07.769416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.769426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.784 [2024-05-16 20:05:07.769474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.769485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.784 [2024-05-16 20:05:07.769529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.769540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:20.784 #43 NEW cov: 12017 ft: 14634 corp: 35/208b lim: 10 exec/s: 43 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:20.784 [2024-05-16 20:05:07.818998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.819019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.784 #44 NEW cov: 12017 ft: 14637 corp: 36/210b lim: 10 exec/s: 44 rss: 74Mb L: 2/10 MS: 1 CMP- DE: "\000\000"- 00:06:20.784 [2024-05-16 20:05:07.859609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c 
cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.859631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.784 [2024-05-16 20:05:07.859678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffa cdw11:00000000 00:06:20.784 [2024-05-16 20:05:07.859689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.785 [2024-05-16 20:05:07.859733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:20.785 [2024-05-16 20:05:07.859743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.785 [2024-05-16 20:05:07.859788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.785 [2024-05-16 20:05:07.859798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.785 [2024-05-16 20:05:07.859843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.785 [2024-05-16 20:05:07.859853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:20.785 #45 NEW cov: 12017 ft: 14645 corp: 37/220b lim: 10 exec/s: 45 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:06:20.785 [2024-05-16 20:05:07.899555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008008 cdw11:00000000 00:06:20.785 [2024-05-16 20:05:07.899577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.785 [2024-05-16 20:05:07.899651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.785 [2024-05-16 20:05:07.899661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.785 [2024-05-16 20:05:07.899709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:20.785 [2024-05-16 20:05:07.899719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.785 [2024-05-16 20:05:07.899766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.785 [2024-05-16 20:05:07.899776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.785 #46 NEW cov: 12017 ft: 14659 corp: 38/229b lim: 10 exec/s: 46 rss: 74Mb L: 9/10 MS: 1 ChangeBit- 00:06:21.045 [2024-05-16 20:05:07.939742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.939764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:07.939827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.939839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:07.939886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.939902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:07.939950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.939960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.045 #47 NEW cov: 12017 ft: 14679 corp: 39/238b lim: 10 exec/s: 47 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:21.045 [2024-05-16 20:05:07.989968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008008 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.989990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:07.990051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.990063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:07.990111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.990122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:07.990170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.990181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:07.990227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:07.990238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.045 #48 NEW cov: 12017 ft: 14680 corp: 40/248b lim: 10 exec/s: 48 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:21.045 [2024-05-16 20:05:08.039972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.039993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.040056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fa00 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.040067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.040115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ff 
cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.040126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.040174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000025 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.040184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.045 #49 NEW cov: 12017 ft: 14701 corp: 41/257b lim: 10 exec/s: 49 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:21.045 [2024-05-16 20:05:08.090084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.090105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.090169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fa00 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.090183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.090228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.090238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.090284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.090294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.045 #50 NEW cov: 12017 ft: 14705 corp: 42/265b lim: 10 exec/s: 50 rss: 74Mb L: 8/10 MS: 1 EraseBytes- 00:06:21.045 [2024-05-16 20:05:08.130350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008008 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.130371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.130418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.130428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.130482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.130492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.130539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.130549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.130594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000c 
cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.130604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.180500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008008 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.180521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.180584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.180595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.180653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003a00 cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.180663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.180710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.180720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.045 [2024-05-16 20:05:08.180766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000008c cdw11:00000000 00:06:21.045 [2024-05-16 20:05:08.180775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.305 #52 NEW cov: 12017 ft: 14717 corp: 43/275b lim: 10 exec/s: 26 rss: 74Mb L: 10/10 MS: 2 CopyPart-ChangeBit- 00:06:21.305 #52 DONE cov: 12017 ft: 14717 corp: 43/275b lim: 10 exec/s: 26 rss: 74Mb 00:06:21.305 ###### Recommended dictionary. ###### 00:06:21.305 "\000\014" # Uses: 5 00:06:21.305 "\000\000" # Uses: 0 00:06:21.305 ###### End of recommended dictionary. 
######
00:06:21.305 Done 52 runs in 2 second(s)
00:06:21.305 [2024-05-16 20:05:08.201697] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:21.305 20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz
20:05:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
20:05:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
20:05:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407'
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
20:05:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7
[2024-05-16 20:05:08.369603] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
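The trace above is the interesting part of each iteration: common.sh walks the fuzzer indices and calls start_llvm_fuzz, which derives a TCP port from the index, rewrites the stock fuzz_json.conf listener from the default port 4420 to that port, arms LeakSanitizer suppressions for two known shutdown-path allocations, and launches llvm_nvme_fuzz against the resulting target ID. A condensed bash sketch of that flow, reconstructed from the trace; $rootdir stands in for the spdk checkout, and the sed redirection and suppression-file writes are assumptions, since bash xtrace does not print redirections:

    # Condensed sketch of one loop iteration traced above (names follow
    # nvmf/run.sh and ../common.sh).
    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local port="44$(printf %02d "$fuzzer_type")"   # instance 7 -> TCP port 4407
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
        local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
        local suppress_file=/var/tmp/suppress_nvmf_fuzz

        mkdir -p "$corpus_dir"
        # point the canned target config at this instance's port instead of 4420
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
        # two known shutdown-path leaks are suppressed for LeakSanitizer
        echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
        echo leak:nvmf_ctrlr_create >> "$suppress_file"

        LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
            "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
            -P "$rootdir/../output/llvm/" \
            -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
            -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
    }

Keeping one port per fuzzer index means successive instances never race for the same listener, which is why the log below shows the target coming up cleanly on 4407.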
00:06:21.305 [2024-05-16 20:05:08.369683] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665907 ] 00:06:21.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.565 [2024-05-16 20:05:08.551796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.565 [2024-05-16 20:05:08.616848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.565 [2024-05-16 20:05:08.675474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.565 [2024-05-16 20:05:08.691427] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:21.565 [2024-05-16 20:05:08.691767] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:21.565 INFO: Running with entropic power schedule (0xFF, 100). 00:06:21.565 INFO: Seed: 2160777864 00:06:21.824 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:21.824 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:21.824 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:21.824 INFO: A corpus is not provided, starting from an empty corpus 00:06:21.824 #2 INITED exec/s: 0 rss: 64Mb 00:06:21.824 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:21.824 This may also happen if the target rejected all inputs we tried so far 00:06:21.824 [2024-05-16 20:05:08.747044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000092b cdw11:00000000 00:06:21.824 [2024-05-16 20:05:08.747070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.824 NEW_FUNC[1/683]: 0x48e220 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:21.824 NEW_FUNC[2/683]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:21.824 #6 NEW cov: 11764 ft: 11774 corp: 2/3b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 4 ChangeBit-ChangeByte-CopyPart-InsertByte- 00:06:21.824 [2024-05-16 20:05:08.928975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:06:21.824 [2024-05-16 20:05:08.929015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.824 NEW_FUNC[1/1]: 0x12e6980 in nvmf_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/nvmf.c:150 00:06:21.824 #7 NEW cov: 11903 ft: 12473 corp: 3/5b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:06:22.083 [2024-05-16 20:05:08.989188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed2b cdw11:00000000 00:06:22.083 [2024-05-16 20:05:08.989216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.083 #8 NEW cov: 11909 ft: 12731 corp: 4/7b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ChangeByte- 
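From here on the output is standard libFuzzer bookkeeping. Each "#N NEW" line records that execution N produced an input worth keeping: cov: is the number of coverage edges hit so far, ft: the distinct features, corp: the corpus entry count and total bytes, lim: the current input-length cap, exec/s and rss: throughput and memory, L: the new input's length versus the maximum, and MS: the mutation sequence that produced it (ShuffleBytes, CrossOver, and so on). A quick way to pull the coverage trajectory of a run out of a saved copy of this console output (the log file name is illustrative):

    grep -o '#[0-9]* NEW cov: [0-9]*' console.log |
        awk '{ gsub("#", "", $1); print $1, $4 }'    # execution number vs. edges covered

A flat second column over many executions is the usual sign that a one-second budget (-t 1) has exhausted the easy coverage for that fuzzer.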
00:06:22.083 [2024-05-16 20:05:09.059311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aab cdw11:00000000 00:06:22.083 [2024-05-16 20:05:09.059336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.083 #9 NEW cov: 11994 ft: 12951 corp: 5/9b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ChangeBit- 00:06:22.083 [2024-05-16 20:05:09.109602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000092b cdw11:00000000 00:06:22.083 [2024-05-16 20:05:09.109627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.083 #10 NEW cov: 11994 ft: 13164 corp: 6/11b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:06:22.083 [2024-05-16 20:05:09.169774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:22.083 [2024-05-16 20:05:09.169797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.083 #11 NEW cov: 11994 ft: 13215 corp: 7/14b lim: 10 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 InsertByte- 00:06:22.083 [2024-05-16 20:05:09.220011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:06:22.083 [2024-05-16 20:05:09.220034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.342 #12 NEW cov: 11994 ft: 13269 corp: 8/16b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 CopyPart- 00:06:22.342 [2024-05-16 20:05:09.280307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000092b cdw11:00000000 00:06:22.342 [2024-05-16 20:05:09.280330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.342 #13 NEW cov: 11994 ft: 13322 corp: 9/18b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ShuffleBytes- 00:06:22.342 [2024-05-16 20:05:09.340739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed68 cdw11:00000000 00:06:22.342 [2024-05-16 20:05:09.340762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.342 [2024-05-16 20:05:09.340836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006868 cdw11:00000000 00:06:22.342 [2024-05-16 20:05:09.340849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.342 #15 NEW cov: 11994 ft: 13533 corp: 10/23b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:22.342 [2024-05-16 20:05:09.401567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:22.342 [2024-05-16 20:05:09.401590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.342 [2024-05-16 20:05:09.401664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002bd9 cdw11:00000000 00:06:22.342 [2024-05-16 20:05:09.401677] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.343 [2024-05-16 20:05:09.401751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d9d9 cdw11:00000000 00:06:22.343 [2024-05-16 20:05:09.401764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.343 [2024-05-16 20:05:09.401831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d9d9 cdw11:00000000 00:06:22.343 [2024-05-16 20:05:09.401843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.343 #16 NEW cov: 11994 ft: 13905 corp: 11/32b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:22.343 [2024-05-16 20:05:09.461789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:22.343 [2024-05-16 20:05:09.461812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.343 [2024-05-16 20:05:09.461881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:22.343 [2024-05-16 20:05:09.461894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.343 [2024-05-16 20:05:09.461965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:22.343 [2024-05-16 20:05:09.461976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.343 [2024-05-16 20:05:09.462052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ed2b cdw11:00000000 00:06:22.343 [2024-05-16 20:05:09.462065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.343 #17 NEW cov: 11994 ft: 13971 corp: 12/40b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:06:22.602 [2024-05-16 20:05:09.511367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000092b cdw11:00000000 00:06:22.602 [2024-05-16 20:05:09.511392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.602 #18 NEW cov: 11994 ft: 14006 corp: 13/42b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 CrossOver- 00:06:22.602 [2024-05-16 20:05:09.561334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000edef cdw11:00000000 00:06:22.602 [2024-05-16 20:05:09.561357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.602 #19 NEW cov: 11994 ft: 14017 corp: 14/44b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 ChangeBit- 00:06:22.602 [2024-05-16 20:05:09.611521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ab2 cdw11:00000000 00:06:22.602 [2024-05-16 20:05:09.611548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
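Each pair of *NOTICE* lines above is one fuzzed request and its outcome: nvme_admin_qpair_print_command shows the admin command the fuzzer submitted (here DELETE IO SQ, opcode 00, with randomized cdw10/cdw11 dwords), and spdk_nvme_print_completion shows the target's reply. An NVMe-oF target manages queues through the fabrics Connect flow rather than the PCIe-style Delete I/O SQ/CQ admin commands, so every completion here is expected to be INVALID OPCODE (status 00/01); the fuzzer is probing for inputs that make the target do anything else. A rough tally of which fuzzed opcodes a run exercised can be pulled from a saved log (file name illustrative):

    grep -o 'print_command: \*NOTICE\*: [A-Z ]* ([0-9a-f]*)' console.log |
        sort | uniq -c | sort -rn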
00:06:22.602 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:22.602 #20 NEW cov: 12017 ft: 14080 corp: 15/46b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 ChangeByte- 00:06:22.602 [2024-05-16 20:05:09.671813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 00:06:22.602 [2024-05-16 20:05:09.671836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.602 #21 NEW cov: 12017 ft: 14178 corp: 16/49b lim: 10 exec/s: 0 rss: 72Mb L: 3/9 MS: 1 ChangeBit- 00:06:22.602 [2024-05-16 20:05:09.722004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed2b cdw11:00000000 00:06:22.602 [2024-05-16 20:05:09.722028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.602 #22 NEW cov: 12017 ft: 14191 corp: 17/52b lim: 10 exec/s: 22 rss: 72Mb L: 3/9 MS: 1 CopyPart- 00:06:22.862 [2024-05-16 20:05:09.772119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000d2b cdw11:00000000 00:06:22.862 [2024-05-16 20:05:09.772145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.862 #23 NEW cov: 12017 ft: 14203 corp: 18/54b lim: 10 exec/s: 23 rss: 72Mb L: 2/9 MS: 1 ChangeBit- 00:06:22.862 [2024-05-16 20:05:09.822700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000d2b cdw11:00000000 00:06:22.862 [2024-05-16 20:05:09.822723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.862 [2024-05-16 20:05:09.822801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000d2b cdw11:00000000 00:06:22.862 [2024-05-16 20:05:09.822815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.862 #24 NEW cov: 12017 ft: 14205 corp: 19/58b lim: 10 exec/s: 24 rss: 72Mb L: 4/9 MS: 1 CopyPart- 00:06:22.862 [2024-05-16 20:05:09.882844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b7ff cdw11:00000000 00:06:22.862 [2024-05-16 20:05:09.882866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.862 [2024-05-16 20:05:09.882949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:22.862 [2024-05-16 20:05:09.882961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.862 #26 NEW cov: 12017 ft: 14253 corp: 20/63b lim: 10 exec/s: 26 rss: 72Mb L: 5/9 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:22.862 [2024-05-16 20:05:09.932881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000019d4 cdw11:00000000 00:06:22.862 [2024-05-16 20:05:09.932905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.862 #27 NEW cov: 12017 ft: 14314 corp: 21/65b lim: 10 exec/s: 27 rss: 72Mb L: 2/9 MS: 1 
ChangeBinInt- 00:06:22.862 [2024-05-16 20:05:09.983144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ab2 cdw11:00000000 00:06:22.862 [2024-05-16 20:05:09.983168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.121 #33 NEW cov: 12017 ft: 14378 corp: 22/68b lim: 10 exec/s: 33 rss: 72Mb L: 3/9 MS: 1 CopyPart- 00:06:23.121 [2024-05-16 20:05:10.043363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.043393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.121 #34 NEW cov: 12017 ft: 14439 corp: 23/71b lim: 10 exec/s: 34 rss: 72Mb L: 3/9 MS: 1 InsertByte- 00:06:23.121 [2024-05-16 20:05:10.093691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.093719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.121 #35 NEW cov: 12017 ft: 14461 corp: 24/73b lim: 10 exec/s: 35 rss: 72Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:23.121 [2024-05-16 20:05:10.143887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000092b cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.143913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.121 #36 NEW cov: 12017 ft: 14468 corp: 25/75b lim: 10 exec/s: 36 rss: 72Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:23.121 [2024-05-16 20:05:10.195161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.195186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.121 [2024-05-16 20:05:10.195264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.195277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.121 [2024-05-16 20:05:10.195351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.195365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.121 [2024-05-16 20:05:10.195440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.195453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.121 [2024-05-16 20:05:10.195532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000edef cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.195546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.121 #37 NEW cov: 12017 ft: 14516 corp: 26/85b lim: 10 exec/s: 37 rss: 72Mb L: 10/10 MS: 1 
InsertRepeatedBytes- 00:06:23.121 [2024-05-16 20:05:10.264416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:06:23.121 [2024-05-16 20:05:10.264441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.380 #38 NEW cov: 12017 ft: 14544 corp: 27/87b lim: 10 exec/s: 38 rss: 72Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:23.380 [2024-05-16 20:05:10.314536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002cb2 cdw11:00000000 00:06:23.380 [2024-05-16 20:05:10.314561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.380 #39 NEW cov: 12017 ft: 14567 corp: 28/89b lim: 10 exec/s: 39 rss: 72Mb L: 2/10 MS: 1 ChangeByte- 00:06:23.380 [2024-05-16 20:05:10.364702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000edab cdw11:00000000 00:06:23.380 [2024-05-16 20:05:10.364730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.380 #40 NEW cov: 12017 ft: 14583 corp: 29/91b lim: 10 exec/s: 40 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:06:23.380 [2024-05-16 20:05:10.415514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.380 [2024-05-16 20:05:10.415541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.380 [2024-05-16 20:05:10.415615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000002a cdw11:00000000 00:06:23.380 [2024-05-16 20:05:10.415628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.380 [2024-05-16 20:05:10.415700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ab2 cdw11:00000000 00:06:23.380 [2024-05-16 20:05:10.415713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.380 #41 NEW cov: 12017 ft: 14721 corp: 30/98b lim: 10 exec/s: 41 rss: 73Mb L: 7/10 MS: 1 CrossOver- 00:06:23.380 [2024-05-16 20:05:10.475739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b7 cdw11:00000000 00:06:23.380 [2024-05-16 20:05:10.475763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.380 [2024-05-16 20:05:10.475837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.380 [2024-05-16 20:05:10.475852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.380 [2024-05-16 20:05:10.475923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.380 [2024-05-16 20:05:10.475936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.380 #42 NEW cov: 12017 ft: 14751 corp: 31/104b lim: 10 exec/s: 42 rss: 73Mb L: 6/10 MS: 
1 CrossOver- 00:06:23.638 [2024-05-16 20:05:10.535300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2c cdw11:00000000 00:06:23.638 [2024-05-16 20:05:10.535324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.638 #43 NEW cov: 12017 ft: 14776 corp: 32/107b lim: 10 exec/s: 43 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:06:23.638 [2024-05-16 20:05:10.586518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed68 cdw11:00000000 00:06:23.638 [2024-05-16 20:05:10.586542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.638 [2024-05-16 20:05:10.586619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006868 cdw11:00000000 00:06:23.638 [2024-05-16 20:05:10.586634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.638 [2024-05-16 20:05:10.586709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006800 cdw11:00000000 00:06:23.638 [2024-05-16 20:05:10.586723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.639 [2024-05-16 20:05:10.586800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.639 [2024-05-16 20:05:10.586813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.639 [2024-05-16 20:05:10.586894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.639 [2024-05-16 20:05:10.586907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.639 #44 NEW cov: 12017 ft: 14797 corp: 33/117b lim: 10 exec/s: 44 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:23.639 [2024-05-16 20:05:10.645734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a32 cdw11:00000000 00:06:23.639 [2024-05-16 20:05:10.645761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.639 #45 NEW cov: 12017 ft: 14808 corp: 34/119b lim: 10 exec/s: 45 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:06:23.639 [2024-05-16 20:05:10.696244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:23.639 [2024-05-16 20:05:10.696269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.639 [2024-05-16 20:05:10.696344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000802b cdw11:00000000 00:06:23.639 [2024-05-16 20:05:10.696358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.639 #46 NEW cov: 12017 ft: 14810 corp: 35/123b lim: 10 exec/s: 46 rss: 73Mb L: 4/10 MS: 1 InsertByte- 00:06:23.639 [2024-05-16 20:05:10.746103] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000090a cdw11:00000000
00:06:23.639 [2024-05-16 20:05:10.746128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:23.639 #47 NEW cov: 12017 ft: 14821 corp: 36/126b lim: 10 exec/s: 23 rss: 73Mb L: 3/10 MS: 1 CrossOver-
00:06:23.639 #47 DONE cov: 12017 ft: 14821 corp: 36/126b lim: 10 exec/s: 23 rss: 73Mb
00:06:23.639 Done 47 runs in 2 second(s)
00:06:23.639 [2024-05-16 20:05:10.779361] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:23.898 20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz
20:05:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
20:05:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
20:05:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408'
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
20:05:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8
[2024-05-16 20:05:10.938721] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
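The full command line for instance 8 is recorded at nvmf/run.sh@45 above, which makes a failing run easy to replay outside Jenkins. A by-hand invocation with the same arguments would look like the sketch below; the workspace paths are specific to this CI node, and /tmp/fuzz_json_8.conf has to be regenerated first with the sed step from the trace, since run.sh@54 deletes it on the next iteration:

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # recreate the per-instance target config (output redirection assumed)
    sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' \
        test/fuzz/llvm/nvmf/fuzz_json.conf > /tmp/fuzz_json_8.conf
    test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 \
        -P ../output/llvm/ \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' \
        -c /tmp/fuzz_json_8.conf -t 1 -D ../corpus/llvm_nvmf_8 -Z 8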
00:06:23.898 [2024-05-16 20:05:10.938800] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666355 ] 00:06:23.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.157 [2024-05-16 20:05:11.122022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.157 [2024-05-16 20:05:11.185987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.157 [2024-05-16 20:05:11.244502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.157 [2024-05-16 20:05:11.260437] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:24.157 [2024-05-16 20:05:11.260765] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:24.157 INFO: Running with entropic power schedule (0xFF, 100). 00:06:24.157 INFO: Seed: 435804684 00:06:24.157 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:24.157 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:24.157 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:24.157 INFO: A corpus is not provided, starting from an empty corpus 00:06:24.417 [2024-05-16 20:05:11.306141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.306168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.417 #2 INITED cov: 11801 ft: 11802 corp: 1/1b exec/s: 0 rss: 69Mb 00:06:24.417 [2024-05-16 20:05:11.346406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.346430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.417 [2024-05-16 20:05:11.346507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.346520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.417 #3 NEW cov: 11931 ft: 13215 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:24.417 [2024-05-16 20:05:11.396317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.396340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.417 #4 NEW cov: 11937 ft: 13478 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeByte- 00:06:24.417 [2024-05-16 20:05:11.436439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.436469] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.417 #5 NEW cov: 12022 ft: 13756 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ShuffleBytes- 00:06:24.417 [2024-05-16 20:05:11.476760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.476785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.417 [2024-05-16 20:05:11.476841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.476853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.417 #6 NEW cov: 12022 ft: 13860 corp: 5/7b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:24.417 [2024-05-16 20:05:11.526865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.526889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.417 [2024-05-16 20:05:11.526945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.417 [2024-05-16 20:05:11.526956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.417 #7 NEW cov: 12022 ft: 13906 corp: 6/9b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:24.678 [2024-05-16 20:05:11.577011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.577034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.577109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.577122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.678 #8 NEW cov: 12022 ft: 14000 corp: 7/11b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:24.678 [2024-05-16 20:05:11.617315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.617340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.617396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.617411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.617472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.617486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.678 #9 NEW cov: 12022 ft: 14244 corp: 8/14b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:06:24.678 [2024-05-16 20:05:11.667461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.667486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.667544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.667556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.667611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.667623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.678 #10 NEW cov: 12022 ft: 14288 corp: 9/17b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:06:24.678 [2024-05-16 20:05:11.717604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.717632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.717715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.717726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.717782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.717793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.678 #11 NEW cov: 12022 ft: 14337 corp: 10/20b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CopyPart- 00:06:24.678 [2024-05-16 20:05:11.757367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.757391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.678 #12 NEW cov: 12022 ft: 14377 corp: 11/21b lim: 5 exec/s: 0 rss: 70Mb L: 1/3 MS: 1 ChangeBit- 00:06:24.678 [2024-05-16 20:05:11.797832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.797856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.797910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.797922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.678 [2024-05-16 20:05:11.797992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.678 [2024-05-16 20:05:11.798007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.938 #13 NEW cov: 12022 ft: 14465 corp: 12/24b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 ChangeBit- 00:06:24.938 [2024-05-16 20:05:11.847785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.938 [2024-05-16 20:05:11.847809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.938 [2024-05-16 20:05:11.847881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.938 [2024-05-16 20:05:11.847893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.938 #14 NEW cov: 12022 ft: 14647 corp: 13/26b lim: 5 exec/s: 0 rss: 70Mb L: 2/3 MS: 1 ChangeBit- 00:06:24.938 [2024-05-16 20:05:11.897711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.938 [2024-05-16 20:05:11.897734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.938 #15 NEW cov: 12022 ft: 14704 corp: 14/27b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ChangeBit- 00:06:24.938 [2024-05-16 20:05:11.947881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.938 [2024-05-16 20:05:11.947907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.938 #16 NEW cov: 12022 ft: 14709 corp: 15/28b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 CrossOver- 00:06:24.939 [2024-05-16 20:05:11.987977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.939 [2024-05-16 20:05:11.988000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.939 #17 NEW cov: 12022 ft: 14744 corp: 16/29b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ChangeBit- 00:06:24.939 [2024-05-16 20:05:12.038481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.939 [2024-05-16 20:05:12.038504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.939 [2024-05-16 20:05:12.038577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.939 [2024-05-16 20:05:12.038589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.939 [2024-05-16 20:05:12.038644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.939 [2024-05-16 20:05:12.038655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.939 #18 NEW cov: 12022 ft: 14760 corp: 17/32b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 CrossOver- 00:06:24.939 [2024-05-16 20:05:12.078238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.939 [2024-05-16 20:05:12.078261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.199 #19 NEW cov: 12022 ft: 14781 corp: 18/33b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ChangeByte- 00:06:25.199 [2024-05-16 20:05:12.118734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.118757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.199 [2024-05-16 20:05:12.118831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.118842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.199 [2024-05-16 20:05:12.118900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.118912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.199 #20 NEW cov: 12022 ft: 14824 corp: 19/36b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 CopyPart- 00:06:25.199 [2024-05-16 20:05:12.158861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.158884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.199 [2024-05-16 20:05:12.158956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.158968] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.199 [2024-05-16 20:05:12.159026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.159037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.199 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:25.199 #21 NEW cov: 12045 ft: 14849 corp: 20/39b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ChangeByte- 00:06:25.199 [2024-05-16 20:05:12.288752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.288784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.199 #22 NEW cov: 12045 ft: 14868 corp: 21/40b lim: 5 exec/s: 22 rss: 72Mb L: 1/3 MS: 1 ChangeByte- 00:06:25.199 [2024-05-16 20:05:12.329270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.329295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.199 [2024-05-16 20:05:12.329350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.329361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.199 [2024-05-16 20:05:12.329414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.329425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.199 [2024-05-16 20:05:12.329479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.199 [2024-05-16 20:05:12.329491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.463 #23 NEW cov: 12045 ft: 15150 corp: 22/44b lim: 5 exec/s: 23 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:06:25.463 [2024-05-16 20:05:12.379373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.379396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.463 [2024-05-16 20:05:12.379449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.379466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
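Instance 8 follows the same pattern with a different victim: it drives NAMESPACE ATTACHMENT (admin opcode 15) commands, each carrying a 4 KiB SGL data block (len:0x1000), under a 5-byte input limit, and evidently this opcode is not implemented by the target either, since every completion again comes back INVALID OPCODE. The MS: suffix on each NEW line names the libFuzzer mutations that produced the input; counting them over a saved log shows which strategies are actually finding new coverage (file name illustrative):

    grep -o 'MS: [0-9]* [A-Za-z-]*' console.log | awk '{ print $3 }' |
        tr '-' '\n' | awk 'NF' | sort | uniq -c | sort -rn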
00:06:25.463 [2024-05-16 20:05:12.379533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.379544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.463 [2024-05-16 20:05:12.379595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.379606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.463 #24 NEW cov: 12045 ft: 15165 corp: 23/48b lim: 5 exec/s: 24 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:06:25.463 [2024-05-16 20:05:12.429372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.429398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.463 [2024-05-16 20:05:12.429450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.429465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.463 [2024-05-16 20:05:12.429533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.429545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.463 #25 NEW cov: 12045 ft: 15182 corp: 24/51b lim: 5 exec/s: 25 rss: 72Mb L: 3/4 MS: 1 CrossOver- 00:06:25.463 [2024-05-16 20:05:12.479224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.479247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.463 #26 NEW cov: 12045 ft: 15197 corp: 25/52b lim: 5 exec/s: 26 rss: 72Mb L: 1/4 MS: 1 ChangeASCIIInt- 00:06:25.463 [2024-05-16 20:05:12.529836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.529859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.463 [2024-05-16 20:05:12.529927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.463 [2024-05-16 20:05:12.529939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.463 [2024-05-16 20:05:12.529991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:25.464 [2024-05-16 20:05:12.530002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.464 [2024-05-16 20:05:12.530056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.464 [2024-05-16 20:05:12.530067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.464 #27 NEW cov: 12045 ft: 15214 corp: 26/56b lim: 5 exec/s: 27 rss: 72Mb L: 4/4 MS: 1 ChangeBinInt- 00:06:25.464 [2024-05-16 20:05:12.579809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.464 [2024-05-16 20:05:12.579832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.464 [2024-05-16 20:05:12.579902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.464 [2024-05-16 20:05:12.579914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.464 [2024-05-16 20:05:12.579967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.464 [2024-05-16 20:05:12.579978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.464 #28 NEW cov: 12045 ft: 15226 corp: 27/59b lim: 5 exec/s: 28 rss: 72Mb L: 3/4 MS: 1 ChangeByte- 00:06:25.729 [2024-05-16 20:05:12.620098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.620121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.620175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.620187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.620239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.620250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.620302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.620312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.729 #29 NEW cov: 12045 ft: 15250 corp: 28/63b lim: 5 exec/s: 29 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:06:25.729 [2024-05-16 
20:05:12.670075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.670097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.670167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.670179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.670231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.670242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.729 #30 NEW cov: 12045 ft: 15269 corp: 29/66b lim: 5 exec/s: 30 rss: 72Mb L: 3/4 MS: 1 ChangeByte- 00:06:25.729 [2024-05-16 20:05:12.709886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.709909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.729 #31 NEW cov: 12045 ft: 15275 corp: 30/67b lim: 5 exec/s: 31 rss: 72Mb L: 1/4 MS: 1 EraseBytes- 00:06:25.729 [2024-05-16 20:05:12.750192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.750214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.750283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.750295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.729 #32 NEW cov: 12045 ft: 15276 corp: 31/69b lim: 5 exec/s: 32 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:25.729 [2024-05-16 20:05:12.790599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.790625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.790680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.790692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.790742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 
[2024-05-16 20:05:12.790753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.790804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.790816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.729 #33 NEW cov: 12045 ft: 15281 corp: 32/73b lim: 5 exec/s: 33 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:06:25.729 [2024-05-16 20:05:12.830265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.830290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.729 #34 NEW cov: 12045 ft: 15283 corp: 33/74b lim: 5 exec/s: 34 rss: 73Mb L: 1/4 MS: 1 ChangeBit- 00:06:25.729 [2024-05-16 20:05:12.870728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.870752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.870805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.870816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.729 [2024-05-16 20:05:12.870867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.729 [2024-05-16 20:05:12.870878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.989 #35 NEW cov: 12045 ft: 15287 corp: 34/77b lim: 5 exec/s: 35 rss: 73Mb L: 3/4 MS: 1 ShuffleBytes- 00:06:25.989 [2024-05-16 20:05:12.910503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:12.910527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.989 #36 NEW cov: 12045 ft: 15294 corp: 35/78b lim: 5 exec/s: 36 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:25.989 [2024-05-16 20:05:12.951108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:12.951131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.989 [2024-05-16 20:05:12.951182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:12.951196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.989 [2024-05-16 20:05:12.951244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:12.951254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.989 [2024-05-16 20:05:12.951302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:12.951312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.989 #37 NEW cov: 12045 ft: 15295 corp: 36/82b lim: 5 exec/s: 37 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:06:25.989 [2024-05-16 20:05:12.990866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:12.990889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.989 [2024-05-16 20:05:12.990940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:12.990951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.989 #38 NEW cov: 12045 ft: 15314 corp: 37/84b lim: 5 exec/s: 38 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:25.989 [2024-05-16 20:05:13.041206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:13.041230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.989 [2024-05-16 20:05:13.041281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.989 [2024-05-16 20:05:13.041292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.990 [2024-05-16 20:05:13.041340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.990 [2024-05-16 20:05:13.041350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.990 #39 NEW cov: 12045 ft: 15321 corp: 38/87b lim: 5 exec/s: 39 rss: 73Mb L: 3/4 MS: 1 ChangeByte- 00:06:25.990 [2024-05-16 20:05:13.081120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.990 [2024-05-16 20:05:13.081143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.990 [2024-05-16 20:05:13.081210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.990 [2024-05-16 20:05:13.081222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.990 #40 NEW cov: 12045 ft: 15332 corp: 39/89b lim: 5 exec/s: 40 rss: 73Mb L: 2/4 MS: 1 ChangeByte- 00:06:25.990 [2024-05-16 20:05:13.121744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.990 [2024-05-16 20:05:13.121768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.990 [2024-05-16 20:05:13.121819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.990 [2024-05-16 20:05:13.121833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.990 [2024-05-16 20:05:13.121882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.990 [2024-05-16 20:05:13.121892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.990 [2024-05-16 20:05:13.121942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.990 [2024-05-16 20:05:13.121952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.990 [2024-05-16 20:05:13.122000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.990 [2024-05-16 20:05:13.122010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.250 #41 NEW cov: 12045 ft: 15413 corp: 40/94b lim: 5 exec/s: 41 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "\005\000"- 00:06:26.250 [2024-05-16 20:05:13.161557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.161580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.250 [2024-05-16 20:05:13.161632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.161643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.250 [2024-05-16 20:05:13.161694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.161705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.250 #42 NEW cov: 12045 ft: 15428 corp: 41/97b lim: 5 exec/s: 42 
rss: 73Mb L: 3/5 MS: 1 PersAutoDict- DE: "\005\000"- 00:06:26.250 [2024-05-16 20:05:13.211683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.211706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.250 [2024-05-16 20:05:13.211759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.211769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.250 [2024-05-16 20:05:13.211819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.211829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.250 #43 NEW cov: 12045 ft: 15495 corp: 42/100b lim: 5 exec/s: 43 rss: 73Mb L: 3/5 MS: 1 ChangeBinInt- 00:06:26.250 [2024-05-16 20:05:13.261666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.261689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.250 [2024-05-16 20:05:13.261760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.261771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.250 #44 NEW cov: 12045 ft: 15545 corp: 43/102b lim: 5 exec/s: 44 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:06:26.250 [2024-05-16 20:05:13.301933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.301955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.250 [2024-05-16 20:05:13.302022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.302034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.250 [2024-05-16 20:05:13.302085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.250 [2024-05-16 20:05:13.302096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.250 #45 NEW cov: 12045 ft: 15561 corp: 44/105b lim: 5 exec/s: 22 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:26.250 #45 DONE cov: 12045 ft: 15561 corp: 44/105b lim: 5 exec/s: 22 rss: 73Mb 00:06:26.250 ###### Recommended dictionary. 
######
00:06:26.250 "\005\000" # Uses: 1
00:06:26.250 ###### End of recommended dictionary. ######
00:06:26.250 Done 45 runs in 2 second(s)
00:06:26.250 [2024-05-16 20:05:13.322614] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:26.510 20:05:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
00:06:26.510 [2024-05-16 20:05:13.492265] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:06:26.510 [2024-05-16 20:05:13.492341] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666796 ]
00:06:26.660 EAL: No free 2048 kB hugepages reported on node 1
00:06:26.660 [2024-05-16 20:05:13.670793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.722 [2024-05-16 20:05:13.734696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.781 [2024-05-16 20:05:13.792995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:26.797 [2024-05-16 20:05:13.808959] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:06:26.797 [2024-05-16 20:05:13.809289] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 ***
00:06:26.812 INFO: Running with entropic power schedule (0xFF, 100).
00:06:26.812 INFO: Seed: 2983802571
00:06:26.812 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f),
00:06:26.812 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0),
00:06:26.813 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:26.813 INFO: A corpus is not provided, starting from an empty corpus
00:06:26.813 [2024-05-16 20:05:13.854584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:26.813 [2024-05-16 20:05:13.854613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:26.813 #2 INITED cov: 11801 ft: 11798 corp: 1/1b exec/s: 0 rss: 70Mb
00:06:26.939 [2024-05-16 20:05:13.895282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:26.939 [2024-05-16 20:05:13.895306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:26.939 [2024-05-16 20:05:13.895362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:26.939 [2024-05-16 20:05:13.895374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:26.939 [2024-05-16 20:05:13.895425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:26.939 [2024-05-16 20:05:13.895436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:26.939 [2024-05-16 20:05:13.895492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:26.939 [2024-05-16 20:05:13.895503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:26.939 [2024-05-16 20:05:13.895554] nvme_qpair.c: 225:nvme_admin_qpair_print_command:
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.769 [2024-05-16 20:05:13.895566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.028 #3 NEW cov: 11931 ft: 13357 corp: 2/6b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:27.029 [2024-05-16 20:05:13.944791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:13.944816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.029 #4 NEW cov: 11937 ft: 13506 corp: 3/7b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:27.029 [2024-05-16 20:05:13.984880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:13.984904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.029 #5 NEW cov: 12022 ft: 13763 corp: 4/8b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:06:27.029 [2024-05-16 20:05:14.025549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.025572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.029 [2024-05-16 20:05:14.025652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.025664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.029 [2024-05-16 20:05:14.025716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.025726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.029 [2024-05-16 20:05:14.025779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.025790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.029 #6 NEW cov: 12022 ft: 13824 corp: 5/12b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:27.029 [2024-05-16 20:05:14.075146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.075168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.029 #7 NEW cov: 12022 ft: 13883 corp: 6/13b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:27.029 [2024-05-16 20:05:14.115931] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.115953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.029 [2024-05-16 20:05:14.116023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.116035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.029 [2024-05-16 20:05:14.116088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.116099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.029 [2024-05-16 20:05:14.116148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.116158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.029 [2024-05-16 20:05:14.116214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.116225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.029 #8 NEW cov: 12022 ft: 13967 corp: 7/18b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeByte- 00:06:27.029 [2024-05-16 20:05:14.165375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.029 [2024-05-16 20:05:14.165397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.288 #9 NEW cov: 12022 ft: 14114 corp: 8/19b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:27.288 [2024-05-16 20:05:14.216065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.288 [2024-05-16 20:05:14.216089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.288 [2024-05-16 20:05:14.216159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.288 [2024-05-16 20:05:14.216171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.288 [2024-05-16 20:05:14.216224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.288 [2024-05-16 20:05:14.216235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:27.288 [2024-05-16 20:05:14.216287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.288 [2024-05-16 20:05:14.216299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.288 #10 NEW cov: 12022 ft: 14169 corp: 9/23b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 EraseBytes- 00:06:27.288 [2024-05-16 20:05:14.255654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.288 [2024-05-16 20:05:14.255678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.288 #11 NEW cov: 12022 ft: 14274 corp: 10/24b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ChangeBit- 00:06:27.288 [2024-05-16 20:05:14.296430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.288 [2024-05-16 20:05:14.296453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.288 [2024-05-16 20:05:14.296526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.296538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.289 [2024-05-16 20:05:14.296589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.296600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.289 [2024-05-16 20:05:14.296652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.296667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.289 [2024-05-16 20:05:14.296732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.296743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.289 #12 NEW cov: 12022 ft: 14306 corp: 11/29b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertByte- 00:06:27.289 [2024-05-16 20:05:14.346343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.346366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.289 [2024-05-16 20:05:14.346422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:27.289 [2024-05-16 20:05:14.346433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.289 [2024-05-16 20:05:14.346503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.346515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.289 [2024-05-16 20:05:14.346567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.346579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.289 #13 NEW cov: 12022 ft: 14326 corp: 12/33b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:27.289 [2024-05-16 20:05:14.386203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.386227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.289 [2024-05-16 20:05:14.386297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.386309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.289 #14 NEW cov: 12022 ft: 14524 corp: 13/35b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:06:27.289 [2024-05-16 20:05:14.426305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.426329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.289 [2024-05-16 20:05:14.426382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.289 [2024-05-16 20:05:14.426393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.548 #15 NEW cov: 12022 ft: 14597 corp: 14/37b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:06:27.548 [2024-05-16 20:05:14.476468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.548 [2024-05-16 20:05:14.476492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.548 [2024-05-16 20:05:14.476547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.548 [2024-05-16 20:05:14.476557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.548 #16 NEW cov: 
12022 ft: 14658 corp: 15/39b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:06:27.548 [2024-05-16 20:05:14.526966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.548 [2024-05-16 20:05:14.526990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.548 [2024-05-16 20:05:14.527043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.548 [2024-05-16 20:05:14.527055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.527107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.527117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.527166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.527177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.549 #17 NEW cov: 12022 ft: 14675 corp: 16/43b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:27.549 [2024-05-16 20:05:14.566832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.566855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.566927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.566938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.566990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.567002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.549 #18 NEW cov: 12022 ft: 14840 corp: 17/46b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 CrossOver- 00:06:27.549 [2024-05-16 20:05:14.617333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.617356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.617409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 
[2024-05-16 20:05:14.617420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.617491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.617502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.617560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.617571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.617622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.617632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.549 #19 NEW cov: 12022 ft: 14851 corp: 18/51b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:27.549 [2024-05-16 20:05:14.667505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.667527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.667583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.667594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.667647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.667657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.667712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.667722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.549 [2024-05-16 20:05:14.667775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.549 [2024-05-16 20:05:14.667786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.808 #20 NEW cov: 12022 ft: 14879 corp: 19/56b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:06:27.808 [2024-05-16 20:05:14.717621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.717644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.808 [2024-05-16 20:05:14.717696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.717708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.808 [2024-05-16 20:05:14.717761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.717772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.808 [2024-05-16 20:05:14.717827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.717838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.808 [2024-05-16 20:05:14.717893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.717905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.808 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:27.808 #21 NEW cov: 12045 ft: 14930 corp: 20/61b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:27.808 [2024-05-16 20:05:14.848392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.848435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.808 [2024-05-16 20:05:14.848529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.848547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.808 [2024-05-16 20:05:14.848613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.848629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.808 [2024-05-16 20:05:14.848694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.848709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.808 [2024-05-16 20:05:14.848777] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.808 [2024-05-16 20:05:14.848791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.808 #22 NEW cov: 12045 ft: 14968 corp: 21/66b lim: 5 exec/s: 22 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:06:27.808 [2024-05-16 20:05:14.897671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.809 [2024-05-16 20:05:14.897695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.809 [2024-05-16 20:05:14.897769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.809 [2024-05-16 20:05:14.897781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.809 #23 NEW cov: 12045 ft: 15047 corp: 22/68b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:27.809 [2024-05-16 20:05:14.937941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.809 [2024-05-16 20:05:14.937965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.809 [2024-05-16 20:05:14.938023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.809 [2024-05-16 20:05:14.938035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.809 [2024-05-16 20:05:14.938091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.809 [2024-05-16 20:05:14.938106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.068 #24 NEW cov: 12045 ft: 15057 corp: 23/71b lim: 5 exec/s: 24 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:28.068 [2024-05-16 20:05:14.978321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:14.978345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:14.978403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:14.978414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:14.978476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:14.978488] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:14.978547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:14.978558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.068 #25 NEW cov: 12045 ft: 15065 corp: 24/75b lim: 5 exec/s: 25 rss: 73Mb L: 4/5 MS: 1 ChangeBit- 00:06:28.068 [2024-05-16 20:05:15.028222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.028245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.028302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.028314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.028369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.028379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.068 #26 NEW cov: 12045 ft: 15081 corp: 25/78b lim: 5 exec/s: 26 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:06:28.068 [2024-05-16 20:05:15.078563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.078585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.078660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.078672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.078726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.078737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.078793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.078806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.068 #27 NEW cov: 12045 ft: 15099 corp: 26/82b lim: 5 exec/s: 27 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:06:28.068 [2024-05-16 20:05:15.118497] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.118519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.118592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.118604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.118657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.118669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.068 #28 NEW cov: 12045 ft: 15104 corp: 27/85b lim: 5 exec/s: 28 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:28.068 [2024-05-16 20:05:15.158435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.158462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.158537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.158549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.068 #29 NEW cov: 12045 ft: 15151 corp: 28/87b lim: 5 exec/s: 29 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:06:28.068 [2024-05-16 20:05:15.208952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.208975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.209033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.209045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.209100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.209110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.068 [2024-05-16 20:05:15.209167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.068 [2024-05-16 20:05:15.209177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:06:28.328 #30 NEW cov: 12045 ft: 15153 corp: 29/91b lim: 5 exec/s: 30 rss: 73Mb L: 4/5 MS: 1 CopyPart- 00:06:28.328 [2024-05-16 20:05:15.259274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.328 [2024-05-16 20:05:15.259296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.328 [2024-05-16 20:05:15.259373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.328 [2024-05-16 20:05:15.259385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.328 [2024-05-16 20:05:15.259440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.328 [2024-05-16 20:05:15.259451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.328 [2024-05-16 20:05:15.259512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.328 [2024-05-16 20:05:15.259523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.328 [2024-05-16 20:05:15.259581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.328 [2024-05-16 20:05:15.259591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.328 #31 NEW cov: 12045 ft: 15163 corp: 30/96b lim: 5 exec/s: 31 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:28.328 [2024-05-16 20:05:15.298880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.328 [2024-05-16 20:05:15.298903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.328 [2024-05-16 20:05:15.298959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.328 [2024-05-16 20:05:15.298971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.329 #32 NEW cov: 12045 ft: 15178 corp: 31/98b lim: 5 exec/s: 32 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:06:28.329 [2024-05-16 20:05:15.348978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.349001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.349074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.349086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.329 #33 NEW cov: 12045 ft: 15213 corp: 32/100b lim: 5 exec/s: 33 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:28.329 [2024-05-16 20:05:15.389626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.389649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.389723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.389735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.389792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.389806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.389860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.389871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.389927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.389938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.329 #34 NEW cov: 12045 ft: 15219 corp: 33/105b lim: 5 exec/s: 34 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:28.329 [2024-05-16 20:05:15.429720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.429742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.429816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.429828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.429887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.429897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.429957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT 
(0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.429967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.329 [2024-05-16 20:05:15.430024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.329 [2024-05-16 20:05:15.430035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.329 #35 NEW cov: 12045 ft: 15233 corp: 34/110b lim: 5 exec/s: 35 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:06:28.589 [2024-05-16 20:05:15.479926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.479960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.480035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.480047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.480105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.480117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.480174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.480188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.480243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.480254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.589 #36 NEW cov: 12045 ft: 15272 corp: 35/115b lim: 5 exec/s: 36 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:06:28.589 [2024-05-16 20:05:15.529557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.529580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.529654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.529666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.589 #37 NEW cov: 12045 ft: 15288 corp: 36/117b 
lim: 5 exec/s: 37 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:06:28.589 [2024-05-16 20:05:15.580075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.580098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.580175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.580186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.580242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.580253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.580308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.580319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.589 #38 NEW cov: 12045 ft: 15296 corp: 37/121b lim: 5 exec/s: 38 rss: 74Mb L: 4/5 MS: 1 ChangeByte- 00:06:28.589 [2024-05-16 20:05:15.630182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.630206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.630262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.630274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.630332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.630343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.630399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.630413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.589 #39 NEW cov: 12045 ft: 15334 corp: 38/125b lim: 5 exec/s: 39 rss: 74Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:28.589 [2024-05-16 20:05:15.670493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.670516] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.670589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.670600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.670656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.670667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.670725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.670735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.670791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.670802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.589 #40 NEW cov: 12045 ft: 15354 corp: 39/130b lim: 5 exec/s: 40 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:28.589 [2024-05-16 20:05:15.720658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.720681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.720739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.720750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.720825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.720836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.720894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.720905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.589 [2024-05-16 20:05:15.720963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.589 [2024-05-16 20:05:15.720974] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.850 #41 NEW cov: 12045 ft: 15361 corp: 40/135b lim: 5 exec/s: 41 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:06:28.850 [2024-05-16 20:05:15.760589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.760615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.850 [2024-05-16 20:05:15.760702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.760713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.850 [2024-05-16 20:05:15.760767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.760778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.850 [2024-05-16 20:05:15.760835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.760846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.850 #42 NEW cov: 12045 ft: 15368 corp: 41/139b lim: 5 exec/s: 42 rss: 74Mb L: 4/5 MS: 1 ChangeByte- 00:06:28.850 [2024-05-16 20:05:15.800136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.800160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.850 #43 NEW cov: 12045 ft: 15424 corp: 42/140b lim: 5 exec/s: 43 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:28.850 [2024-05-16 20:05:15.840995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.841020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.850 [2024-05-16 20:05:15.841075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.841086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.850 [2024-05-16 20:05:15.841140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.841151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.850 [2024-05-16 20:05:15.841202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.841212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.850 [2024-05-16 20:05:15.841269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.850 [2024-05-16 20:05:15.841280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.850 #44 NEW cov: 12045 ft: 15440 corp: 43/145b lim: 5 exec/s: 22 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:28.850 #44 DONE cov: 12045 ft: 15440 corp: 43/145b lim: 5 exec/s: 22 rss: 74Mb 00:06:28.850 Done 44 runs in 2 second(s) 00:06:28.850 [2024-05-16 20:05:15.863030] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:28.850 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:29.108 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:29.108 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:29.108 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:29.109 20:05:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:29.109 20:05:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:29.109 20:05:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:29.109 20:05:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:29.109 20:05:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:29.109 20:05:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:29.109 [2024-05-16 20:05:16.031442] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:29.109 [2024-05-16 20:05:16.031508] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667245 ] 00:06:29.109 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.109 [2024-05-16 20:05:16.199322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.387 [2024-05-16 20:05:16.263687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.387 [2024-05-16 20:05:16.322263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.387 [2024-05-16 20:05:16.338225] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:29.387 [2024-05-16 20:05:16.338575] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:29.387 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.387 INFO: Seed: 1217835526 00:06:29.387 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:29.387 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:29.387 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:29.387 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.387 #2 INITED exec/s: 0 rss: 63Mb 00:06:29.387 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:29.387 This may also happen if the target rejected all inputs we tried so far 00:06:29.387 [2024-05-16 20:05:16.383725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a7b7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.387 [2024-05-16 20:05:16.383749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.387 NEW_FUNC[1/683]: 0x48fb90 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:29.387 NEW_FUNC[2/683]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:29.387 #4 NEW cov: 11812 ft: 11813 corp: 2/9b lim: 40 exec/s: 0 rss: 70Mb L: 8/8 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:29.697 [2024-05-16 20:05:16.534406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.697 [2024-05-16 20:05:16.534438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.697 [2024-05-16 20:05:16.534497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.697 [2024-05-16 20:05:16.534509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.697 [2024-05-16 20:05:16.534563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e70a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.534573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.698 NEW_FUNC[1/2]: 0xff1120 in posix_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1447 00:06:29.698 NEW_FUNC[2/2]: 0x1a9c3c0 in spdk_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:522 00:06:29.698 #8 NEW cov: 11954 ft: 12686 corp: 3/36b lim: 40 exec/s: 0 rss: 70Mb L: 27/27 MS: 4 CopyPart-ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:06:29.698 [2024-05-16 20:05:16.584184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.584208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.698 #9 NEW cov: 11960 ft: 13069 corp: 4/45b lim: 40 exec/s: 0 rss: 70Mb L: 9/27 MS: 1 CrossOver- 00:06:29.698 [2024-05-16 20:05:16.624558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e70ae7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.624581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.624635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 
[2024-05-16 20:05:16.624647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.624696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e70a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.624707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.698 #10 NEW cov: 12045 ft: 13373 corp: 5/72b lim: 40 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 CrossOver- 00:06:29.698 [2024-05-16 20:05:16.674427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.674449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.698 #11 NEW cov: 12045 ft: 13442 corp: 6/84b lim: 40 exec/s: 0 rss: 70Mb L: 12/27 MS: 1 CopyPart- 00:06:29.698 [2024-05-16 20:05:16.724956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.724980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.725049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.725061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.725115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.725125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.725180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0000007b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.725191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.698 #12 NEW cov: 12045 ft: 13991 corp: 7/117b lim: 40 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:29.698 [2024-05-16 20:05:16.774994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e70ae7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.775016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.775068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.775079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.775131] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e70a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.775141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.698 #13 NEW cov: 12045 ft: 14102 corp: 8/144b lim: 40 exec/s: 0 rss: 71Mb L: 27/33 MS: 1 ShuffleBytes- 00:06:29.698 [2024-05-16 20:05:16.825151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b3e cdw11:3e3e3e3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.825173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.825226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3e3e3e3e cdw11:3e3e3e3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.825237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.698 [2024-05-16 20:05:16.825290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3e3e3e3e cdw11:3e7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.698 [2024-05-16 20:05:16.825301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.957 #14 NEW cov: 12045 ft: 14134 corp: 9/171b lim: 40 exec/s: 0 rss: 71Mb L: 27/33 MS: 1 InsertRepeatedBytes- 00:06:29.957 [2024-05-16 20:05:16.865002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.957 [2024-05-16 20:05:16.865024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.957 #15 NEW cov: 12045 ft: 14174 corp: 10/180b lim: 40 exec/s: 0 rss: 71Mb L: 9/33 MS: 1 ShuffleBytes- 00:06:29.957 [2024-05-16 20:05:16.905118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b097b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.957 [2024-05-16 20:05:16.905141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.957 #16 NEW cov: 12045 ft: 14313 corp: 11/189b lim: 40 exec/s: 0 rss: 71Mb L: 9/33 MS: 1 ChangeBinInt- 00:06:29.957 [2024-05-16 20:05:16.955251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a7b7b0a cdw11:7b7b7b09 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.957 [2024-05-16 20:05:16.955273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.957 #17 NEW cov: 12045 ft: 14331 corp: 12/198b lim: 40 exec/s: 0 rss: 71Mb L: 9/33 MS: 1 ShuffleBytes- 00:06:29.957 [2024-05-16 20:05:17.005398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a7b7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.957 [2024-05-16 20:05:17.005421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.957 #18 NEW cov: 12045 ft: 
14348 corp: 13/206b lim: 40 exec/s: 0 rss: 71Mb L: 8/33 MS: 1 CopyPart- 00:06:29.957 [2024-05-16 20:05:17.055558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a7bbf7b cdw11:0a7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.957 [2024-05-16 20:05:17.055582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.957 #19 NEW cov: 12045 ft: 14387 corp: 14/216b lim: 40 exec/s: 0 rss: 71Mb L: 10/33 MS: 1 InsertByte- 00:06:30.217 [2024-05-16 20:05:17.105707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000a0d0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.105732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.217 #23 NEW cov: 12045 ft: 14400 corp: 15/224b lim: 40 exec/s: 0 rss: 72Mb L: 8/33 MS: 4 CopyPart-CopyPart-ChangeBinInt-CMP- DE: "\000\000\000\000"- 00:06:30.217 [2024-05-16 20:05:17.145830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a7b7b7b cdw11:7b7b2c7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.145853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.217 #24 NEW cov: 12045 ft: 14403 corp: 16/233b lim: 40 exec/s: 0 rss: 72Mb L: 9/33 MS: 1 InsertByte- 00:06:30.217 [2024-05-16 20:05:17.195955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a7b0a7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.195978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.217 #25 NEW cov: 12045 ft: 14405 corp: 17/242b lim: 40 exec/s: 0 rss: 72Mb L: 9/33 MS: 1 ShuffleBytes- 00:06:30.217 [2024-05-16 20:05:17.236410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.236433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.217 [2024-05-16 20:05:17.236503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.236515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.217 [2024-05-16 20:05:17.236567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.236581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.217 [2024-05-16 20:05:17.236635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:000000e2 cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.236646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:30.217 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:30.217 #26 NEW cov: 12068 ft: 14442 corp: 18/276b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertByte- 00:06:30.217 [2024-05-16 20:05:17.286219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.286243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.217 #27 NEW cov: 12068 ft: 14463 corp: 19/285b lim: 40 exec/s: 0 rss: 72Mb L: 9/34 MS: 1 ShuffleBytes- 00:06:30.217 [2024-05-16 20:05:17.326321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b097b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.217 [2024-05-16 20:05:17.326344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.217 #28 NEW cov: 12068 ft: 14491 corp: 20/294b lim: 40 exec/s: 0 rss: 72Mb L: 9/34 MS: 1 ShuffleBytes- 00:06:30.477 [2024-05-16 20:05:17.366416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a7b7b7b cdw11:0a7b0a7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.366439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.477 #29 NEW cov: 12068 ft: 14515 corp: 21/303b lim: 40 exec/s: 29 rss: 72Mb L: 9/34 MS: 1 CrossOver- 00:06:30.477 [2024-05-16 20:05:17.416994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.417015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.417084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.417094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.417146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.417157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.417210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0000007b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.417220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.477 #30 NEW cov: 12068 ft: 14555 corp: 22/338b lim: 40 exec/s: 30 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:06:30.477 [2024-05-16 20:05:17.456952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e70ae7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:30.477 [2024-05-16 20:05:17.456974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.457026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.457039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.457092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e70a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.457102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.477 #31 NEW cov: 12068 ft: 14567 corp: 23/365b lim: 40 exec/s: 31 rss: 72Mb L: 27/35 MS: 1 ChangeByte- 00:06:30.477 [2024-05-16 20:05:17.507213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.507235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.507305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.507316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.507368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000007b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.507378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.507431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:7b7b7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.507442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.477 #32 NEW cov: 12068 ft: 14580 corp: 24/400b lim: 40 exec/s: 32 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:06:30.477 [2024-05-16 20:05:17.557246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e77b7b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.557268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.557322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.557333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.557382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.557392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.477 #33 NEW cov: 12068 ft: 14610 corp: 25/427b lim: 40 exec/s: 33 rss: 72Mb L: 27/35 MS: 1 CrossOver- 00:06:30.477 [2024-05-16 20:05:17.607397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.607419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.607494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.607506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.477 [2024-05-16 20:05:17.607560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e70a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.477 [2024-05-16 20:05:17.607571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.737 #34 NEW cov: 12068 ft: 14612 corp: 26/454b lim: 40 exec/s: 34 rss: 72Mb L: 27/35 MS: 1 CrossOver- 00:06:30.737 [2024-05-16 20:05:17.647478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e70ae7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.647500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.737 [2024-05-16 20:05:17.647568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e7e7e700 cdw11:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.647580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.737 [2024-05-16 20:05:17.647643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e70a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.647654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.737 #35 NEW cov: 12068 ft: 14628 corp: 27/481b lim: 40 exec/s: 35 rss: 72Mb L: 27/35 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:06:30.737 [2024-05-16 20:05:17.687612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b3e cdw11:3e3e3e3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.687633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.737 [2024-05-16 20:05:17.687702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3e3e3e3e cdw11:3e3e3e3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.687713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.737 [2024-05-16 20:05:17.687765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3e3e7b7b cdw11:7b3e3e7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.687776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.737 #36 NEW cov: 12068 ft: 14683 corp: 28/508b lim: 40 exec/s: 36 rss: 72Mb L: 27/35 MS: 1 ShuffleBytes- 00:06:30.737 [2024-05-16 20:05:17.737538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.737561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.737 #39 NEW cov: 12068 ft: 14689 corp: 29/522b lim: 40 exec/s: 39 rss: 72Mb L: 14/35 MS: 3 EraseBytes-ChangeBit-CMP- DE: "\001\004\000\000\000\000\000\000"- 00:06:30.737 [2024-05-16 20:05:17.777752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.777773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.737 [2024-05-16 20:05:17.777844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.777855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.737 #41 NEW cov: 12068 ft: 14880 corp: 30/544b lim: 40 exec/s: 41 rss: 72Mb L: 22/35 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:30.737 [2024-05-16 20:05:17.817766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ae7e7e7 cdw11:e77b7b09 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.817788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.737 #42 NEW cov: 12068 ft: 14932 corp: 31/553b lim: 40 exec/s: 42 rss: 72Mb L: 9/35 MS: 1 CrossOver- 00:06:30.737 [2024-05-16 20:05:17.858168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e70ae7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.858190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.737 [2024-05-16 20:05:17.858258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.858269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.737 [2024-05-16 20:05:17.858322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:000000e7 cdw11:e7e7e70a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.737 [2024-05-16 20:05:17.858333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:30.737 #43 NEW cov: 12068 ft: 14940 corp: 32/580b lim: 40 exec/s: 43 rss: 72Mb L: 27/35 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:06:30.996 [2024-05-16 20:05:17.898364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e70ae7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.996 [2024-05-16 20:05:17.898385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.996 [2024-05-16 20:05:17.898453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e7e7e70a cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.996 [2024-05-16 20:05:17.898470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.996 [2024-05-16 20:05:17.898521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.996 [2024-05-16 20:05:17.898531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.996 [2024-05-16 20:05:17.898582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0000e7e7 cdw11:e7e70a7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.996 [2024-05-16 20:05:17.898592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.996 #44 NEW cov: 12068 ft: 14965 corp: 33/614b lim: 40 exec/s: 44 rss: 72Mb L: 34/35 MS: 1 CopyPart- 00:06:30.996 [2024-05-16 20:05:17.948362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.996 [2024-05-16 20:05:17.948384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.996 [2024-05-16 20:05:17.948436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.996 [2024-05-16 20:05:17.948446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.996 [2024-05-16 20:05:17.948517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:007b7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.996 [2024-05-16 20:05:17.948529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.996 #45 NEW cov: 12068 ft: 14980 corp: 34/639b lim: 40 exec/s: 45 rss: 72Mb L: 25/35 MS: 1 EraseBytes- 00:06:30.997 [2024-05-16 20:05:17.988468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.997 [2024-05-16 20:05:17.988490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.997 [2024-05-16 20:05:17.988544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7b7b7b3e cdw11:3e3e3e3e SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.997 [2024-05-16 20:05:17.988555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.997 [2024-05-16 20:05:17.988607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3e3e3e3e cdw11:3e7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.997 [2024-05-16 20:05:17.988617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.997 #46 NEW cov: 12068 ft: 15031 corp: 35/666b lim: 40 exec/s: 46 rss: 72Mb L: 27/35 MS: 1 CrossOver- 00:06:30.997 [2024-05-16 20:05:18.028625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a7b cdw11:7b7b7b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.997 [2024-05-16 20:05:18.028646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.997 [2024-05-16 20:05:18.028715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000007b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.997 [2024-05-16 20:05:18.028726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.997 [2024-05-16 20:05:18.028779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a7b7b7b cdw11:7b7b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.997 [2024-05-16 20:05:18.028789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.997 #47 NEW cov: 12068 ft: 15045 corp: 36/695b lim: 40 exec/s: 47 rss: 72Mb L: 29/35 MS: 1 CrossOver- 00:06:30.997 [2024-05-16 20:05:18.078471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b0a7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.997 [2024-05-16 20:05:18.078492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.997 #48 NEW cov: 12068 ft: 15063 corp: 37/704b lim: 40 exec/s: 48 rss: 72Mb L: 9/35 MS: 1 CopyPart- 00:06:30.997 [2024-05-16 20:05:18.118615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.997 [2024-05-16 20:05:18.118636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.997 #49 NEW cov: 12068 ft: 15075 corp: 38/717b lim: 40 exec/s: 49 rss: 73Mb L: 13/35 MS: 1 InsertByte- 00:06:31.256 [2024-05-16 20:05:18.158719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a817b cdw11:7b7b0a7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.256 [2024-05-16 20:05:18.158742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.256 #50 NEW cov: 12068 ft: 15076 corp: 39/726b lim: 40 exec/s: 50 rss: 73Mb L: 9/35 MS: 1 ChangeBinInt- 00:06:31.256 [2024-05-16 20:05:18.209245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.256 [2024-05-16 20:05:18.209267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.256 [2024-05-16 20:05:18.209340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.256 [2024-05-16 20:05:18.209351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.256 [2024-05-16 20:05:18.209404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.256 [2024-05-16 20:05:18.209415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.256 [2024-05-16 20:05:18.209469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.256 [2024-05-16 20:05:18.209480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.256 #51 NEW cov: 12068 ft: 15114 corp: 40/765b lim: 40 exec/s: 51 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:31.256 [2024-05-16 20:05:18.259264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.256 [2024-05-16 20:05:18.259287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.256 [2024-05-16 20:05:18.259357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000cc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.256 [2024-05-16 20:05:18.259368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.256 [2024-05-16 20:05:18.259421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:007b7b7b cdw11:7b7b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.256 [2024-05-16 20:05:18.259431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.257 #52 NEW cov: 12068 ft: 15150 corp: 41/790b lim: 40 exec/s: 52 rss: 73Mb L: 25/39 MS: 1 ChangeByte- 00:06:31.257 [2024-05-16 20:05:18.309403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e77b7b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.257 [2024-05-16 20:05:18.309425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.257 [2024-05-16 20:05:18.309496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.257 [2024-05-16 20:05:18.309508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.257 [2024-05-16 20:05:18.309562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.257 [2024-05-16 20:05:18.309572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.257 #53 NEW cov: 12068 ft: 15159 corp: 42/821b lim: 40 exec/s: 53 rss: 73Mb L: 31/39 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:06:31.257 [2024-05-16 20:05:18.359694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a7b7b cdw11:7b7b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.257 [2024-05-16 20:05:18.359715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.257 [2024-05-16 20:05:18.359784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.257 [2024-05-16 20:05:18.359800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.257 [2024-05-16 20:05:18.359856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.257 [2024-05-16 20:05:18.359868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.257 [2024-05-16 20:05:18.359920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0000007b cdw11:727b7b7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.257 [2024-05-16 20:05:18.359931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.257 #54 NEW cov: 12068 ft: 15170 corp: 43/854b lim: 40 exec/s: 27 rss: 73Mb L: 33/39 MS: 1 ChangeBinInt- 00:06:31.257 #54 DONE cov: 12068 ft: 15170 corp: 43/854b lim: 40 exec/s: 27 rss: 73Mb 00:06:31.257 ###### Recommended dictionary. ###### 00:06:31.257 "\000\000\000\000" # Uses: 3 00:06:31.257 "\001\004\000\000\000\000\000\000" # Uses: 0 00:06:31.257 ###### End of recommended dictionary. 
###### 00:06:31.257 Done 54 runs in 2 second(s) 00:06:31.257 [2024-05-16 20:05:18.381355] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:31.517 20:05:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:31.517 [2024-05-16 20:05:18.552980] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:31.517 [2024-05-16 20:05:18.553061] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667578 ] 00:06:31.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.776 [2024-05-16 20:05:18.724880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.776 [2024-05-16 20:05:18.790241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.776 [2024-05-16 20:05:18.848747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.776 [2024-05-16 20:05:18.864716] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:31.776 [2024-05-16 20:05:18.865044] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:31.776 INFO: Running with entropic power schedule (0xFF, 100). 00:06:31.776 INFO: Seed: 3743850800 00:06:31.776 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:31.776 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:31.776 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:31.776 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.776 #2 INITED exec/s: 0 rss: 63Mb 00:06:31.776 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:31.776 This may also happen if the target rejected all inputs we tried so far 00:06:31.776 [2024-05-16 20:05:18.910895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.776 [2024-05-16 20:05:18.910921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.776 [2024-05-16 20:05:18.910995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.776 [2024-05-16 20:05:18.911007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.776 [2024-05-16 20:05:18.911064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.776 [2024-05-16 20:05:18.911076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.776 [2024-05-16 20:05:18.911131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.776 [2024-05-16 20:05:18.911141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.035 NEW_FUNC[1/686]: 0x491900 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:32.035 NEW_FUNC[2/686]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 
00:06:32.035 #8 NEW cov: 11836 ft: 11837 corp: 2/34b lim: 40 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:32.035 [2024-05-16 20:05:19.060869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.060899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.035 [2024-05-16 20:05:19.060974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.060986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.035 #9 NEW cov: 11966 ft: 12752 corp: 3/54b lim: 40 exec/s: 0 rss: 71Mb L: 20/33 MS: 1 InsertRepeatedBytes- 00:06:32.035 [2024-05-16 20:05:19.101291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.101314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.035 [2024-05-16 20:05:19.101373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.101384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.035 [2024-05-16 20:05:19.101459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.101470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.035 [2024-05-16 20:05:19.101528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.101539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.035 #10 NEW cov: 11972 ft: 12989 corp: 4/87b lim: 40 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeBit- 00:06:32.035 [2024-05-16 20:05:19.151460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.151482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.035 [2024-05-16 20:05:19.151554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.151565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.035 [2024-05-16 20:05:19.151619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 
20:05:19.151630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.035 [2024-05-16 20:05:19.151685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.035 [2024-05-16 20:05:19.151696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.295 #16 NEW cov: 12057 ft: 13242 corp: 5/120b lim: 40 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ShuffleBytes- 00:06:32.295 [2024-05-16 20:05:19.201418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.201440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.201520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.201533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.201591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95959595 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.201601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.295 #17 NEW cov: 12057 ft: 13622 corp: 6/149b lim: 40 exec/s: 0 rss: 72Mb L: 29/33 MS: 1 CrossOver- 00:06:32.295 [2024-05-16 20:05:19.251385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.251408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.251488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.251501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.295 #18 NEW cov: 12057 ft: 13746 corp: 7/172b lim: 40 exec/s: 0 rss: 72Mb L: 23/33 MS: 1 EraseBytes- 00:06:32.295 [2024-05-16 20:05:19.291728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.291751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.291826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c123 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.291837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.291896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.291907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.295 #19 NEW cov: 12057 ft: 13808 corp: 8/196b lim: 40 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 InsertByte- 00:06:32.295 [2024-05-16 20:05:19.342014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.342036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.342095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.342106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.342166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.342177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.342234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.342245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.295 #20 NEW cov: 12057 ft: 13879 corp: 9/229b lim: 40 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:06:32.295 [2024-05-16 20:05:19.381570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a4f4f4f cdw11:4f4f4f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.381591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.295 #21 NEW cov: 12057 ft: 14661 corp: 10/241b lim: 40 exec/s: 0 rss: 72Mb L: 12/33 MS: 1 InsertRepeatedBytes- 00:06:32.295 [2024-05-16 20:05:19.422198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f4f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.422220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.422278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4f0ac1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.422292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.422346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.422357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:06:32.295 [2024-05-16 20:05:19.422411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:23c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.295 [2024-05-16 20:05:19.422422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.554 #22 NEW cov: 12057 ft: 14726 corp: 11/274b lim: 40 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:06:32.554 [2024-05-16 20:05:19.472355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.554 [2024-05-16 20:05:19.472377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.554 [2024-05-16 20:05:19.472435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.554 [2024-05-16 20:05:19.472446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.554 [2024-05-16 20:05:19.472523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.554 [2024-05-16 20:05:19.472534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.554 [2024-05-16 20:05:19.472592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.554 [2024-05-16 20:05:19.472602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.554 #23 NEW cov: 12057 ft: 14752 corp: 12/307b lim: 40 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ShuffleBytes- 00:06:32.554 [2024-05-16 20:05:19.522638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c119 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.554 [2024-05-16 20:05:19.522659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.554 [2024-05-16 20:05:19.522734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:6ca0585e cdw11:a50600c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.554 [2024-05-16 20:05:19.522746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.554 [2024-05-16 20:05:19.522803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.554 [2024-05-16 20:05:19.522814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.554 [2024-05-16 20:05:19.522870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.554 [2024-05-16 20:05:19.522880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:32.554 #24 NEW cov: 12057 ft: 14772 corp: 13/340b lim: 40 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CMP- DE: "\031l\240X^\245\006\000"- 00:06:32.555 [2024-05-16 20:05:19.562738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.562762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.562840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.562851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.562908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.562918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.562975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c9c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.562986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.555 #25 NEW cov: 12057 ft: 14791 corp: 14/372b lim: 40 exec/s: 0 rss: 72Mb L: 32/33 MS: 1 EraseBytes- 00:06:32.555 [2024-05-16 20:05:19.612901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f304f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.612923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.612994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4f0ac1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.613006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.613063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.613074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.613132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:23c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.613142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.555 #26 NEW cov: 12057 ft: 14809 corp: 15/405b lim: 40 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeByte- 00:06:32.555 [2024-05-16 20:05:19.663052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:32.555 [2024-05-16 20:05:19.663074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.663147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.663158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.663214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c5c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.663224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.555 [2024-05-16 20:05:19.663281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.555 [2024-05-16 20:05:19.663295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.555 #27 NEW cov: 12057 ft: 14815 corp: 16/438b lim: 40 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeBit- 00:06:32.814 [2024-05-16 20:05:19.703016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.703038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.703096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c5c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.703107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.703166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c9c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.703177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.814 #28 NEW cov: 12057 ft: 14840 corp: 17/462b lim: 40 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 EraseBytes- 00:06:32.814 [2024-05-16 20:05:19.753133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1393e3e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.753157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.753215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:3ec1c1c1 cdw11:c1c1c123 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.753227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.753280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.753290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.814 #29 NEW cov: 12057 ft: 14850 corp: 18/486b lim: 40 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 ChangeBinInt- 00:06:32.814 [2024-05-16 20:05:19.793404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.793427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.793491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.793503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.793559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.793570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.793629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9ff cdw11:ffffffc1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.793639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.814 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:32.814 #30 NEW cov: 12080 ft: 14879 corp: 19/523b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:32.814 [2024-05-16 20:05:19.843551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.843573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.843649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.843661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.843729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.843740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.843796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c945 cdw11:3ec1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.843806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.814 #31 NEW cov: 12080 
ft: 14886 corp: 20/556b lim: 40 exec/s: 0 rss: 73Mb L: 33/37 MS: 1 ChangeBinInt- 00:06:32.814 [2024-05-16 20:05:19.883676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2fc1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.883697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.883773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.883785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.883842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.883853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.883909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.883920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.814 #32 NEW cov: 12080 ft: 14899 corp: 21/589b lim: 40 exec/s: 32 rss: 73Mb L: 33/37 MS: 1 ChangeByte- 00:06:32.814 [2024-05-16 20:05:19.923779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.923801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.923862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.923873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.923946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b8c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.923957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.814 [2024-05-16 20:05:19.924012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.814 [2024-05-16 20:05:19.924025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.814 #33 NEW cov: 12080 ft: 14912 corp: 22/622b lim: 40 exec/s: 33 rss: 73Mb L: 33/37 MS: 1 ChangeBinInt- 00:06:33.074 [2024-05-16 20:05:19.963732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1393e3e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.074 [2024-05-16 20:05:19.963757] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.074 [2024-05-16 20:05:19.963816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:3ec1c1c1 cdw11:c1c1c123 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.074 [2024-05-16 20:05:19.963827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.074 [2024-05-16 20:05:19.963883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c0 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.074 [2024-05-16 20:05:19.963894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.074 #34 NEW cov: 12080 ft: 14933 corp: 23/646b lim: 40 exec/s: 34 rss: 73Mb L: 24/37 MS: 1 ChangeBit- 00:06:33.074 [2024-05-16 20:05:20.014145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f304f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.074 [2024-05-16 20:05:20.014248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.074 [2024-05-16 20:05:20.014339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4f0ac1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.074 [2024-05-16 20:05:20.014356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.014427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.014441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.014510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:23c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.014524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.075 #35 NEW cov: 12080 ft: 14977 corp: 24/679b lim: 40 exec/s: 35 rss: 73Mb L: 33/37 MS: 1 ChangeByte- 00:06:33.075 [2024-05-16 20:05:20.064232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c13bc1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.064259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.064317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c9 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.064329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.064385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.064396] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.064452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c1c9 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.064474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.075 #36 NEW cov: 12080 ft: 14994 corp: 25/713b lim: 40 exec/s: 36 rss: 73Mb L: 34/37 MS: 1 InsertByte- 00:06:33.075 [2024-05-16 20:05:20.104300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2fc1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.104325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.104401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.104412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.104469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.104480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.104537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c10000 cdw11:000000c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.104547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.075 #37 NEW cov: 12080 ft: 15007 corp: 26/751b lim: 40 exec/s: 37 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:33.075 [2024-05-16 20:05:20.154471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2fc1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.154494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.154555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.154566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.154625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.154635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.154692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c9c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 
20:05:20.154703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.075 #38 NEW cov: 12080 ft: 15018 corp: 27/784b lim: 40 exec/s: 38 rss: 73Mb L: 33/38 MS: 1 ChangeBit- 00:06:33.075 [2024-05-16 20:05:20.194568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2fc1c1 cdw11:c1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.194589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.194649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.194660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.194717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.194731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.075 [2024-05-16 20:05:20.194789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c10000 cdw11:000000c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.075 [2024-05-16 20:05:20.194800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.335 #39 NEW cov: 12080 ft: 15066 corp: 28/822b lim: 40 exec/s: 39 rss: 73Mb L: 38/38 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:33.335 [2024-05-16 20:05:20.244761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f4f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.335 [2024-05-16 20:05:20.244782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.335 [2024-05-16 20:05:20.244839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4f0ac1c1 cdw11:31c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.335 [2024-05-16 20:05:20.244850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.335 [2024-05-16 20:05:20.244907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.335 [2024-05-16 20:05:20.244918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.335 [2024-05-16 20:05:20.244974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c123c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.335 [2024-05-16 20:05:20.244985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.335 #40 NEW cov: 12080 ft: 15077 corp: 29/856b lim: 40 exec/s: 40 rss: 73Mb L: 34/38 MS: 1 InsertByte- 00:06:33.335 [2024-05-16 20:05:20.284814] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f304f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.335 [2024-05-16 20:05:20.284837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.335 [2024-05-16 20:05:20.284897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4f0ac1c1 cdw11:c1c1c1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.284909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.284966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.284976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.285031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:23c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.285041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.336 #41 NEW cov: 12080 ft: 15090 corp: 30/889b lim: 40 exec/s: 41 rss: 74Mb L: 33/38 MS: 1 ChangeBit- 00:06:33.336 [2024-05-16 20:05:20.334939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f4f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.334961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.335024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4f0ac1c1 cdw11:31c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.335035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.335090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1ff cdw11:ffffffc1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.335101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.335156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c1c1 cdw11:c123c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.335167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.336 #42 NEW cov: 12080 ft: 15101 corp: 31/927b lim: 40 exec/s: 42 rss: 74Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:33.336 [2024-05-16 20:05:20.384762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.384784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 
20:05:20.384841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a4f4f4f cdw11:4f4f4f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.384852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.336 #43 NEW cov: 12080 ft: 15103 corp: 32/947b lim: 40 exec/s: 43 rss: 74Mb L: 20/38 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:33.336 [2024-05-16 20:05:20.435225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:c10ac1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.435247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.435319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.435331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.435389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c5c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.435399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.435458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.435469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.336 #44 NEW cov: 12080 ft: 15116 corp: 33/980b lim: 40 exec/s: 44 rss: 74Mb L: 33/38 MS: 1 ShuffleBytes- 00:06:33.336 [2024-05-16 20:05:20.475374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c119 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.475396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.475459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:9461585e cdw11:a50600c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.475470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.475529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.475540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.336 [2024-05-16 20:05:20.475598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.336 [2024-05-16 20:05:20.475608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.597 #45 NEW cov: 12080 ft: 15125 corp: 34/1013b lim: 40 exec/s: 45 rss: 74Mb L: 33/38 MS: 1 ChangeBinInt- 00:06:33.597 [2024-05-16 20:05:20.525307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.525328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.525405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c123 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.525416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.525476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.525487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.597 #46 NEW cov: 12080 ft: 15151 corp: 35/1037b lim: 40 exec/s: 46 rss: 74Mb L: 24/38 MS: 1 CopyPart- 00:06:33.597 [2024-05-16 20:05:20.565585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.565606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.565683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c9c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.565694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.565753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.565763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.565819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c9c1 cdw11:2fc1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.565829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.597 #47 NEW cov: 12080 ft: 15164 corp: 36/1076b lim: 40 exec/s: 47 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:06:33.597 [2024-05-16 20:05:20.605676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2fc1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.605698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.605771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:33.597 [2024-05-16 20:05:20.605783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.605844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.605855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.605912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c100c1 cdw11:c1c1c1c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.605923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.597 #48 NEW cov: 12080 ft: 15182 corp: 37/1110b lim: 40 exec/s: 48 rss: 74Mb L: 34/39 MS: 1 CrossOver- 00:06:33.597 [2024-05-16 20:05:20.655503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00004f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.655524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.655599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4f4f4f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.655610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.597 #49 NEW cov: 12080 ft: 15190 corp: 38/1126b lim: 40 exec/s: 49 rss: 74Mb L: 16/39 MS: 1 EraseBytes- 00:06:33.597 [2024-05-16 20:05:20.705628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.705650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.597 [2024-05-16 20:05:20.705723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.597 [2024-05-16 20:05:20.705734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.597 #50 NEW cov: 12080 ft: 15201 corp: 39/1147b lim: 40 exec/s: 50 rss: 74Mb L: 21/39 MS: 1 EraseBytes- 00:06:33.857 [2024-05-16 20:05:20.756135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f4f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.756156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.857 [2024-05-16 20:05:20.756215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4f0ac1c1 cdw11:31c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.756225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.857 [2024-05-16 20:05:20.756280] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.756291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.857 [2024-05-16 20:05:20.756348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c123c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.756358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.857 #51 NEW cov: 12080 ft: 15214 corp: 40/1181b lim: 40 exec/s: 51 rss: 74Mb L: 34/39 MS: 1 ShuffleBytes- 00:06:33.857 [2024-05-16 20:05:20.796095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.796119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.857 [2024-05-16 20:05:20.796179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.796190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.857 [2024-05-16 20:05:20.796248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95279595 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.796259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.857 #52 NEW cov: 12080 ft: 15229 corp: 41/1210b lim: 40 exec/s: 52 rss: 74Mb L: 29/39 MS: 1 ChangeByte- 00:06:33.857 [2024-05-16 20:05:20.846413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.846434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.857 [2024-05-16 20:05:20.846494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.846505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.857 [2024-05-16 20:05:20.846561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c9c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.846572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.857 [2024-05-16 20:05:20.846632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c1c1c1c1 cdw11:c1c1c1c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.857 [2024-05-16 20:05:20.846642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.857 #53 NEW cov: 12080 ft: 15293 
corp: 42/1249b lim: 40 exec/s: 53 rss: 74Mb L: 39/39 MS: 1 CrossOver-
00:06:33.857 [2024-05-16 20:05:20.896405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00004f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:33.857 [2024-05-16 20:05:20.896427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:33.857 [2024-05-16 20:05:20.896485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:33.857 [2024-05-16 20:05:20.896496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:33.857 [2024-05-16 20:05:20.896556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff4f4f4f SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:33.857 [2024-05-16 20:05:20.896566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:33.857 #54 NEW cov: 12080 ft: 15295 corp: 43/1278b lim: 40 exec/s: 27 rss: 74Mb L: 29/39 MS: 1 InsertRepeatedBytes-
00:06:33.857 #54 DONE cov: 12080 ft: 15295 corp: 43/1278b lim: 40 exec/s: 27 rss: 74Mb
00:06:33.857 ###### Recommended dictionary. ######
00:06:33.857 "\031l\240X^\245\006\000" # Uses: 0
00:06:33.857 "\377\377\377\377" # Uses: 0
00:06:33.857 "\000\000\000\000\000\000\000\000" # Uses: 1
00:06:33.857 ###### End of recommended dictionary. ######
00:06:33.857 Done 54 runs in 2 second(s)
00:06:33.857 [2024-05-16 20:05:20.931653] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412'
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:34.116 20:05:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12
00:06:34.116 [2024-05-16 20:05:21.100913] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:06:34.116 [2024-05-16 20:05:21.100993] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667940 ]
00:06:34.116 EAL: No free 2048 kB hugepages reported on node 1
00:06:34.376 [2024-05-16 20:05:21.270356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.376 [2024-05-16 20:05:21.340050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.376 [2024-05-16 20:05:21.398808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:34.376 [2024-05-16 20:05:21.414767] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:06:34.376 [2024-05-16 20:05:21.415117] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 ***
00:06:34.376 INFO: Running with entropic power schedule (0xFF, 100).
00:06:34.376 INFO: Seed: 1999891461
00:06:34.376 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f),
00:06:34.376 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0),
00:06:34.376 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:06:34.376 INFO: A corpus is not provided, starting from an empty corpus
00:06:34.376 #2 INITED exec/s: 0 rss: 64Mb
00:06:34.376 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:34.376 This may also happen if the target rejected all inputs we tried so far 00:06:34.376 [2024-05-16 20:05:21.483022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.376 [2024-05-16 20:05:21.483058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.376 [2024-05-16 20:05:21.483149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.376 [2024-05-16 20:05:21.483163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.376 [2024-05-16 20:05:21.483251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.376 [2024-05-16 20:05:21.483266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.635 NEW_FUNC[1/686]: 0x493670 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:34.635 NEW_FUNC[2/686]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:34.635 #27 NEW cov: 11831 ft: 11830 corp: 2/26b lim: 40 exec/s: 0 rss: 71Mb L: 25/25 MS: 5 ChangeBit-ChangeBit-CopyPart-CopyPart-InsertRepeatedBytes- 00:06:34.635 [2024-05-16 20:05:21.643051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:265a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.635 [2024-05-16 20:05:21.643084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.635 [2024-05-16 20:05:21.643173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.635 [2024-05-16 20:05:21.643188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.635 #29 NEW cov: 11964 ft: 12825 corp: 3/46b lim: 40 exec/s: 0 rss: 71Mb L: 20/25 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:34.635 [2024-05-16 20:05:21.692766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.635 [2024-05-16 20:05:21.692791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.635 #30 NEW cov: 11970 ft: 13703 corp: 4/60b lim: 40 exec/s: 0 rss: 71Mb L: 14/25 MS: 1 EraseBytes- 00:06:34.635 [2024-05-16 20:05:21.753032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.635 [2024-05-16 20:05:21.753055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.894 #31 NEW cov: 12055 ft: 13926 corp: 5/69b lim: 40 exec/s: 0 rss: 71Mb L: 9/25 MS: 1 EraseBytes- 00:06:34.894 [2024-05-16 
20:05:21.813532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.894 [2024-05-16 20:05:21.813554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.894 [2024-05-16 20:05:21.813641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.894 [2024-05-16 20:05:21.813655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.894 #42 NEW cov: 12055 ft: 14085 corp: 6/89b lim: 40 exec/s: 0 rss: 71Mb L: 20/25 MS: 1 InsertRepeatedBytes- 00:06:34.894 [2024-05-16 20:05:21.863737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.894 [2024-05-16 20:05:21.863760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.894 #43 NEW cov: 12055 ft: 14226 corp: 7/100b lim: 40 exec/s: 0 rss: 72Mb L: 11/25 MS: 1 EraseBytes- 00:06:34.894 [2024-05-16 20:05:21.924132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.894 [2024-05-16 20:05:21.924155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.894 #44 NEW cov: 12055 ft: 14281 corp: 8/114b lim: 40 exec/s: 0 rss: 72Mb L: 14/25 MS: 1 ShuffleBytes- 00:06:34.894 [2024-05-16 20:05:21.975041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:da5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.894 [2024-05-16 20:05:21.975064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.894 [2024-05-16 20:05:21.975147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.894 [2024-05-16 20:05:21.975160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.894 #45 NEW cov: 12055 ft: 14342 corp: 9/134b lim: 40 exec/s: 0 rss: 72Mb L: 20/25 MS: 1 ChangeBinInt- 00:06:34.894 [2024-05-16 20:05:22.035341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:da5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.894 [2024-05-16 20:05:22.035365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.894 [2024-05-16 20:05:22.035458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:905a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.894 [2024-05-16 20:05:22.035471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.153 #46 NEW cov: 12055 ft: 14359 corp: 10/155b lim: 40 exec/s: 0 rss: 72Mb L: 21/25 MS: 1 InsertByte- 00:06:35.153 [2024-05-16 
20:05:22.095802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:da5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.153 [2024-05-16 20:05:22.095824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.153 [2024-05-16 20:05:22.095912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:905a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.153 [2024-05-16 20:05:22.095925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.153 [2024-05-16 20:05:22.096006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.153 [2024-05-16 20:05:22.096017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.153 #47 NEW cov: 12055 ft: 14444 corp: 11/180b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:35.153 [2024-05-16 20:05:22.155263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.153 [2024-05-16 20:05:22.155286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.153 #48 NEW cov: 12055 ft: 14450 corp: 12/189b lim: 40 exec/s: 0 rss: 72Mb L: 9/25 MS: 1 ChangeByte- 00:06:35.153 [2024-05-16 20:05:22.215773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.153 [2024-05-16 20:05:22.215796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.153 #49 NEW cov: 12055 ft: 14508 corp: 13/200b lim: 40 exec/s: 0 rss: 72Mb L: 11/25 MS: 1 CopyPart- 00:06:35.153 [2024-05-16 20:05:22.276463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:da5a5a41 cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.153 [2024-05-16 20:05:22.276487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.153 [2024-05-16 20:05:22.276571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:905a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.154 [2024-05-16 20:05:22.276585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.154 #50 NEW cov: 12055 ft: 14532 corp: 14/221b lim: 40 exec/s: 0 rss: 72Mb L: 21/25 MS: 1 ChangeByte- 00:06:35.412 [2024-05-16 20:05:22.326231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f7ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.412 [2024-05-16 20:05:22.326256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.412 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:35.412 #51 
NEW cov: 12078 ft: 14549 corp: 15/230b lim: 40 exec/s: 0 rss: 72Mb L: 9/25 MS: 1 ChangeBit- 00:06:35.412 [2024-05-16 20:05:22.377004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:da5a5a5a cdw11:265a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.412 [2024-05-16 20:05:22.377027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.412 [2024-05-16 20:05:22.377122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:905a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.412 [2024-05-16 20:05:22.377136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.412 [2024-05-16 20:05:22.377219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.412 [2024-05-16 20:05:22.377230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.412 #52 NEW cov: 12078 ft: 14561 corp: 16/255b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 ChangeByte- 00:06:35.412 [2024-05-16 20:05:22.436800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff94ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.412 [2024-05-16 20:05:22.436824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.412 #53 NEW cov: 12078 ft: 14640 corp: 17/267b lim: 40 exec/s: 53 rss: 72Mb L: 12/25 MS: 1 InsertByte- 00:06:35.412 [2024-05-16 20:05:22.487232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.412 [2024-05-16 20:05:22.487259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.412 #54 NEW cov: 12078 ft: 14658 corp: 18/282b lim: 40 exec/s: 54 rss: 72Mb L: 15/25 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:35.412 [2024-05-16 20:05:22.538128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.412 [2024-05-16 20:05:22.538157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.412 [2024-05-16 20:05:22.538246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.412 [2024-05-16 20:05:22.538261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.412 [2024-05-16 20:05:22.538349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.413 [2024-05-16 20:05:22.538361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.671 #55 NEW cov: 12078 ft: 14690 corp: 19/313b lim: 40 exec/s: 55 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 
00:06:35.671 [2024-05-16 20:05:22.597703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.671 [2024-05-16 20:05:22.597727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.672 #56 NEW cov: 12078 ft: 14693 corp: 20/327b lim: 40 exec/s: 56 rss: 72Mb L: 14/31 MS: 1 EraseBytes- 00:06:35.672 [2024-05-16 20:05:22.647775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff0800ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.672 [2024-05-16 20:05:22.647801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.672 #57 NEW cov: 12078 ft: 14714 corp: 21/336b lim: 40 exec/s: 57 rss: 72Mb L: 9/31 MS: 1 CMP- DE: "\010\000"- 00:06:35.672 [2024-05-16 20:05:22.698039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff94ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.672 [2024-05-16 20:05:22.698066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.672 #58 NEW cov: 12078 ft: 14726 corp: 22/348b lim: 40 exec/s: 58 rss: 72Mb L: 12/31 MS: 1 ShuffleBytes- 00:06:35.672 [2024-05-16 20:05:22.768330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.672 [2024-05-16 20:05:22.768355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.672 #59 NEW cov: 12078 ft: 14736 corp: 23/360b lim: 40 exec/s: 59 rss: 72Mb L: 12/31 MS: 1 PersAutoDict- DE: "\010\000"- 00:06:35.672 [2024-05-16 20:05:22.818418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff9500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.672 [2024-05-16 20:05:22.818444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.931 #60 NEW cov: 12078 ft: 14749 corp: 24/372b lim: 40 exec/s: 60 rss: 72Mb L: 12/31 MS: 1 ChangeBinInt- 00:06:35.931 [2024-05-16 20:05:22.869773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:da5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.931 [2024-05-16 20:05:22.869797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.931 [2024-05-16 20:05:22.869877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:905a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.931 [2024-05-16 20:05:22.869891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.931 [2024-05-16 20:05:22.869974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5ada5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.931 [2024-05-16 20:05:22.869986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:06:35.931 [2024-05-16 20:05:22.870069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.931 [2024-05-16 20:05:22.870084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.931 #61 NEW cov: 12078 ft: 15058 corp: 25/408b lim: 40 exec/s: 61 rss: 72Mb L: 36/36 MS: 1 CopyPart- 00:06:35.931 [2024-05-16 20:05:22.918895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffff0100 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.931 [2024-05-16 20:05:22.918918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.931 #62 NEW cov: 12078 ft: 15073 corp: 26/422b lim: 40 exec/s: 62 rss: 73Mb L: 14/36 MS: 1 ChangeBinInt- 00:06:35.931 [2024-05-16 20:05:22.969368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffff03 cdw11:00ffff08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.931 [2024-05-16 20:05:22.969391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.931 #63 NEW cov: 12078 ft: 15084 corp: 27/433b lim: 40 exec/s: 63 rss: 73Mb L: 11/36 MS: 1 CMP- DE: "\003\000"- 00:06:35.931 [2024-05-16 20:05:23.029647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.931 [2024-05-16 20:05:23.029671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.931 #64 NEW cov: 12078 ft: 15134 corp: 28/447b lim: 40 exec/s: 64 rss: 73Mb L: 14/36 MS: 1 CopyPart- 00:06:36.190 [2024-05-16 20:05:23.090211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f7ffffff cdw11:ffffff5b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 20:05:23.090235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.190 [2024-05-16 20:05:23.090327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2946f15f cdw11:a50600ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 20:05:23.090340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.190 #65 NEW cov: 12078 ft: 15180 corp: 29/464b lim: 40 exec/s: 65 rss: 73Mb L: 17/36 MS: 1 CMP- DE: "[)F\361_\245\006\000"- 00:06:36.190 [2024-05-16 20:05:23.150058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0200ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 20:05:23.150081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.190 #66 NEW cov: 12078 ft: 15189 corp: 30/479b lim: 40 exec/s: 66 rss: 73Mb L: 15/36 MS: 1 ChangeBinInt- 00:06:36.190 [2024-05-16 20:05:23.200957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ada5a5a cdw11:5a265a26 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 
20:05:23.200981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.190 [2024-05-16 20:05:23.201067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5affff cdw11:ffff905a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 20:05:23.201080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.190 #69 NEW cov: 12078 ft: 15260 corp: 31/497b lim: 40 exec/s: 69 rss: 73Mb L: 18/36 MS: 3 ShuffleBytes-InsertByte-CrossOver- 00:06:36.190 [2024-05-16 20:05:23.251975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:da5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 20:05:23.252000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.190 [2024-05-16 20:05:23.252080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5b2946 cdw11:f15fa506 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 20:05:23.252097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.190 [2024-05-16 20:05:23.252176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:005a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 20:05:23.252189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.190 #70 NEW cov: 12078 ft: 15281 corp: 32/525b lim: 40 exec/s: 70 rss: 73Mb L: 28/36 MS: 1 PersAutoDict- DE: "[)F\361_\245\006\000"- 00:06:36.190 [2024-05-16 20:05:23.301262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:feffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.190 [2024-05-16 20:05:23.301284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.190 #71 NEW cov: 12078 ft: 15282 corp: 33/534b lim: 40 exec/s: 71 rss: 73Mb L: 9/36 MS: 1 ChangeBinInt- 00:06:36.450 [2024-05-16 20:05:23.361410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffff03 cdw11:00ffff08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.450 [2024-05-16 20:05:23.361435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.450 #72 NEW cov: 12078 ft: 15378 corp: 34/545b lim: 40 exec/s: 72 rss: 73Mb L: 11/36 MS: 1 ChangeByte- 00:06:36.450 [2024-05-16 20:05:23.421967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffefffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.450 [2024-05-16 20:05:23.421990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.450 #73 NEW cov: 12078 ft: 15382 corp: 35/559b lim: 40 exec/s: 36 rss: 74Mb L: 14/36 MS: 1 ChangeBit- 00:06:36.450 #73 DONE cov: 12078 ft: 15382 corp: 35/559b lim: 40 exec/s: 36 rss: 74Mb 00:06:36.450 ###### Recommended dictionary. 
######
00:06:36.450 "\377\377\377\377" # Uses: 1
00:06:36.450 "\010\000" # Uses: 1
00:06:36.450 "\003\000" # Uses: 0
00:06:36.450 "[)F\361_\245\006\000" # Uses: 1
00:06:36.450 ###### End of recommended dictionary. ######
00:06:36.450 Done 73 runs in 2 second(s)
00:06:36.450 [2024-05-16 20:05:23.456568] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413'
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:36.450 20:05:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13
00:06:36.708 [2024-05-16 20:05:23.617321] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:06:36.708 [2024-05-16 20:05:23.617398] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668384 ]
00:06:36.708 EAL: No free 2048 kB hugepages reported on node 1
00:06:36.708 [2024-05-16 20:05:23.776420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.708 [2024-05-16 20:05:23.841040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.966 [2024-05-16 20:05:23.899834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:36.966 [2024-05-16 20:05:23.915792] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:06:36.966 [2024-05-16 20:05:23.916140] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 ***
00:06:36.966 INFO: Running with entropic power schedule (0xFF, 100).
00:06:36.966 INFO: Seed: 204931332
00:06:36.966 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f),
00:06:36.966 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0),
00:06:36.966 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:06:36.966 INFO: A corpus is not provided, starting from an empty corpus
00:06:36.966 #2 INITED exec/s: 0 rss: 63Mb
00:06:36.966 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:36.966 This may also happen if the target rejected all inputs we tried so far
00:06:36.966 [2024-05-16 20:05:23.984593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.966 [2024-05-16 20:05:23.984636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:36.966 [2024-05-16 20:05:23.984747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.966 [2024-05-16 20:05:23.984762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:36.966 [2024-05-16 20:05:23.984865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.966 [2024-05-16 20:05:23.984878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:36.967 [2024-05-16 20:05:23.984975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.967 [2024-05-16 20:05:23.984990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:37.226 NEW_FUNC[1/685]: 0x495230 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257
00:06:37.226 NEW_FUNC[2/685]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:37.226 #7 NEW cov: 11822 ft: 11819 corp: 2/36b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 5 CopyPart-ChangeBit-ChangeBit-CrossOver-InsertRepeatedBytes-
00:06:37.226 [2024-05-16 20:05:24.154276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.154314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.226 [2024-05-16 20:05:24.154401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:faffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.154416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.226 [2024-05-16 20:05:24.154505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.154520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:37.226 [2024-05-16 20:05:24.154602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.154615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:37.226 #18 NEW cov: 11952 ft: 12438 corp: 3/71b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt-
00:06:37.226 [2024-05-16 20:05:24.213785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.213809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.226 [2024-05-16 20:05:24.213888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.213901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.226 #19 NEW cov: 11958 ft: 13228 corp: 4/93b lim: 40 exec/s: 0 rss: 70Mb L: 22/35 MS: 1 EraseBytes-
00:06:37.226 [2024-05-16 20:05:24.263869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.263893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.226 [2024-05-16 20:05:24.263979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.263993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.226 #20 NEW cov: 12043 ft: 13455 corp: 5/115b lim: 40 exec/s: 0 rss: 70Mb L: 22/35 MS: 1 CrossOver-
00:06:37.226 [2024-05-16 20:05:24.324130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.324154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.226 [2024-05-16 20:05:24.324239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.226 [2024-05-16 20:05:24.324252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.226 #21 NEW cov: 12043 ft: 13631 corp: 6/132b lim: 40 exec/s: 0 rss: 71Mb L: 17/35 MS: 1 EraseBytes-
00:06:37.485 [2024-05-16 20:05:24.384394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.384421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.485 [2024-05-16 20:05:24.384510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.384523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.485 #22 NEW cov: 12043 ft: 13716 corp: 7/149b lim: 40 exec/s: 0 rss: 71Mb L: 17/35 MS: 1 ShuffleBytes-
00:06:37.485 [2024-05-16 20:05:24.444547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.444570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.485 [2024-05-16 20:05:24.444649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.444663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.485 #25 NEW cov: 12043 ft: 13778 corp: 8/167b lim: 40 exec/s: 0 rss: 71Mb L: 18/35 MS: 3 ShuffleBytes-ChangeByte-CrossOver-
00:06:37.485 [2024-05-16 20:05:24.495356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff48ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.495379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.485 [2024-05-16 20:05:24.495471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.495496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.485 [2024-05-16 20:05:24.495574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.495585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:37.485 [2024-05-16 20:05:24.495662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.495674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:37.485 #26 NEW cov: 12043 ft: 13811 corp: 9/202b lim: 40 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeByte-
00:06:37.485 [2024-05-16 20:05:24.545578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff48ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.545600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.485 [2024-05-16 20:05:24.545684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.545697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.485 [2024-05-16 20:05:24.545773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.545785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:37.485 [2024-05-16 20:05:24.545872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.485 [2024-05-16 20:05:24.545887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:37.486 #27 NEW cov: 12043 ft: 13833 corp: 10/239b lim: 40 exec/s: 0 rss: 71Mb L: 37/37 MS: 1 CrossOver-
00:06:37.486 [2024-05-16 20:05:24.605275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.486 [2024-05-16 20:05:24.605299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.486 [2024-05-16 20:05:24.605381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff2dffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.486 [2024-05-16 20:05:24.605396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.746 #28 NEW cov: 12043 ft: 13876 corp: 11/257b lim: 40 exec/s: 0 rss: 71Mb L: 18/37 MS: 1 ChangeByte-
00:06:37.746 [2024-05-16 20:05:24.665806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.665831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.665920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.665935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.666014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.666027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:37.746 #29 NEW cov: 12043 ft: 14072 corp: 12/286b lim: 40 exec/s: 0 rss: 71Mb L: 29/37 MS: 1 InsertRepeatedBytes-
00:06:37.746 [2024-05-16 20:05:24.716303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.716327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.716405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:7fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.716419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.716506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.716519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.716611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.716625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:37.746 #30 NEW cov: 12043 ft: 14092 corp: 13/321b lim: 40 exec/s: 0 rss: 71Mb L: 35/37 MS: 1 ChangeBit-
00:06:37.746 [2024-05-16 20:05:24.766286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff48ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.766312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.766394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.766407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.766500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.766512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:37.746 #31 NEW cov: 12043 ft: 14171 corp: 14/349b lim: 40 exec/s: 0 rss: 72Mb L: 28/37 MS: 1 EraseBytes-
00:06:37.746 [2024-05-16 20:05:24.816797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.816820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.816916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:faffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.816929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.746 [2024-05-16 20:05:24.817011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffffff4 cdw11:f4f4ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.746 [2024-05-16 20:05:24.817024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:37.747 [2024-05-16 20:05:24.817109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.747 [2024-05-16 20:05:24.817122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:37.747 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:06:37.747 #32 NEW cov: 12066 ft: 14211 corp: 15/387b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 InsertRepeatedBytes-
00:06:37.747 [2024-05-16 20:05:24.887037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff41ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.747 [2024-05-16 20:05:24.887060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:37.747 [2024-05-16 20:05:24.887144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffaffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.747 [2024-05-16 20:05:24.887158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:37.747 [2024-05-16 20:05:24.887237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:f4f4f4ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.747 [2024-05-16 20:05:24.887249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:37.747 [2024-05-16 20:05:24.887330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:37.747 [2024-05-16 20:05:24.887342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:38.006 #33 NEW cov: 12066 ft: 14230 corp: 16/426b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertByte-
00:06:38.006 [2024-05-16 20:05:24.946650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:24.946673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.006 [2024-05-16 20:05:24.946763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff1e cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:24.946776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.006 #34 NEW cov: 12066 ft: 14238 corp: 17/444b lim: 40 exec/s: 34 rss: 72Mb L: 18/39 MS: 1 InsertByte-
00:06:38.006 [2024-05-16 20:05:25.007128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:25.007152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.006 [2024-05-16 20:05:25.007236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:25.007251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.006 [2024-05-16 20:05:25.007317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:25.007328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:38.006 #35 NEW cov: 12066 ft: 14252 corp: 18/473b lim: 40 exec/s: 35 rss: 72Mb L: 29/39 MS: 1 CopyPart-
00:06:38.006 [2024-05-16 20:05:25.057603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:25.057628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.006 [2024-05-16 20:05:25.057722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:7fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:25.057736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.006 [2024-05-16 20:05:25.057817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff7aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:25.057830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:38.006 [2024-05-16 20:05:25.057918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:25.057930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:38.006 #36 NEW cov: 12066 ft: 14287 corp: 19/509b lim: 40 exec/s: 36 rss: 72Mb L: 36/39 MS: 1 InsertByte-
00:06:38.006 [2024-05-16 20:05:25.116940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.006 [2024-05-16 20:05:25.116963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.006 #37 NEW cov: 12066 ft: 14621 corp: 20/519b lim: 40 exec/s: 37 rss: 72Mb L: 10/39 MS: 1 EraseBytes-
00:06:38.266 [2024-05-16 20:05:25.167175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.167203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.266 #38 NEW cov: 12066 ft: 14650 corp: 21/532b lim: 40 exec/s: 38 rss: 72Mb L: 13/39 MS: 1 CrossOver-
00:06:38.266 [2024-05-16 20:05:25.228287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff48ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.228312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.266 [2024-05-16 20:05:25.228403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.228417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.266 [2024-05-16 20:05:25.228505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.228516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:38.266 [2024-05-16 20:05:25.228591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.228604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:38.266 #39 NEW cov: 12066 ft: 14684 corp: 22/568b lim: 40 exec/s: 39 rss: 72Mb L: 36/39 MS: 1 InsertByte-
00:06:38.266 [2024-05-16 20:05:25.277603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.277627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.266 #40 NEW cov: 12066 ft: 14723 corp: 23/578b lim: 40 exec/s: 40 rss: 72Mb L: 10/39 MS: 1 ChangeByte-
00:06:38.266 [2024-05-16 20:05:25.328100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.328125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.266 [2024-05-16 20:05:25.328221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.328236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.266 #41 NEW cov: 12066 ft: 14752 corp: 24/596b lim: 40 exec/s: 41 rss: 72Mb L: 18/39 MS: 1 ChangeBit-
00:06:38.266 [2024-05-16 20:05:25.378863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fffffbff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.378887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.266 [2024-05-16 20:05:25.378969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:7fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.378984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.266 [2024-05-16 20:05:25.379062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.379074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:38.266 [2024-05-16 20:05:25.379156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.266 [2024-05-16 20:05:25.379169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:38.266 #42 NEW cov: 12066 ft: 14831 corp: 25/631b lim: 40 exec/s: 42 rss: 72Mb L: 35/39 MS: 1 ChangeBit-
00:06:38.525 [2024-05-16 20:05:25.429062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff48ff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.429088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.525 [2024-05-16 20:05:25.429163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.429177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.525 [2024-05-16 20:05:25.429264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.429277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:38.525 [2024-05-16 20:05:25.429359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.429372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:38.525 #43 NEW cov: 12066 ft: 14855 corp: 26/666b lim: 40 exec/s: 43 rss: 72Mb L: 35/39 MS: 1 InsertRepeatedBytes-
00:06:38.525 [2024-05-16 20:05:25.498513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:6affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.498537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.525 #44 NEW cov: 12066 ft: 14879 corp: 27/676b lim: 40 exec/s: 44 rss: 72Mb L: 10/39 MS: 1 ChangeByte-
00:06:38.525 [2024-05-16 20:05:25.558679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.558702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.525 #45 NEW cov: 12066 ft: 14895 corp: 28/687b lim: 40 exec/s: 45 rss: 72Mb L: 11/39 MS: 1 InsertByte-
00:06:38.525 [2024-05-16 20:05:25.609109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.609133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.525 [2024-05-16 20:05:25.609221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.609236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.525 #46 NEW cov: 12066 ft: 14911 corp: 29/710b lim: 40 exec/s: 46 rss: 72Mb L: 23/39 MS: 1 InsertByte-
00:06:38.525 [2024-05-16 20:05:25.659341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.659365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.525 [2024-05-16 20:05:25.659453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff53 cdw11:ffff2dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.525 [2024-05-16 20:05:25.659472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.784 #47 NEW cov: 12066 ft: 14927 corp: 30/729b lim: 40 exec/s: 47 rss: 72Mb L: 19/39 MS: 1 InsertByte-
00:06:38.784 [2024-05-16 20:05:25.720180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.784 [2024-05-16 20:05:25.720203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.784 [2024-05-16 20:05:25.720288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:7fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.784 [2024-05-16 20:05:25.720301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.784 [2024-05-16 20:05:25.720376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff09ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.784 [2024-05-16 20:05:25.720388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:38.784 [2024-05-16 20:05:25.720471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.784 [2024-05-16 20:05:25.720485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:38.784 #48 NEW cov: 12066 ft: 15052 corp: 31/764b lim: 40 exec/s: 48 rss: 72Mb L: 35/39 MS: 1 ChangeBinInt-
00:06:38.784 [2024-05-16 20:05:25.769758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:31120000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.784 [2024-05-16 20:05:25.769781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.784 [2024-05-16 20:05:25.769876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ff2dffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.785 [2024-05-16 20:05:25.769889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.785 #49 NEW cov: 12066 ft: 15065 corp: 32/782b lim: 40 exec/s: 49 rss: 72Mb L: 18/39 MS: 1 ChangeBinInt-
00:06:38.785 [2024-05-16 20:05:25.819923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.785 [2024-05-16 20:05:25.819946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.785 [2024-05-16 20:05:25.820033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.785 [2024-05-16 20:05:25.820045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.785 #50 NEW cov: 12066 ft: 15114 corp: 33/798b lim: 40 exec/s: 50 rss: 72Mb L: 16/39 MS: 1 CrossOver-
00:06:38.785 [2024-05-16 20:05:25.880759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff48ff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.785 [2024-05-16 20:05:25.880784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:38.785 [2024-05-16 20:05:25.880866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.785 [2024-05-16 20:05:25.880882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:38.785 [2024-05-16 20:05:25.880966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.785 [2024-05-16 20:05:25.880978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:38.785 [2024-05-16 20:05:25.881058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff7effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:38.785 [2024-05-16 20:05:25.881070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:38.785 #51 NEW cov: 12066 ft: 15212 corp: 34/833b lim: 40 exec/s: 51 rss: 72Mb L: 35/39 MS: 1 ChangeByte-
00:06:39.044 [2024-05-16 20:05:25.940851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:39.044 [2024-05-16 20:05:25.940873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.044 [2024-05-16 20:05:25.940959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:39.044 [2024-05-16 20:05:25.940971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.044 [2024-05-16 20:05:25.941049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:39.044 [2024-05-16 20:05:25.941061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.044 #52 NEW cov: 12066 ft: 15239 corp: 35/862b lim: 40 exec/s: 26 rss: 73Mb L: 29/39 MS: 1 ChangeByte-
00:06:39.044 #52 DONE cov: 12066 ft: 15239 corp: 35/862b lim: 40 exec/s: 26 rss: 73Mb
00:06:39.044 Done 52 runs in 2 second(s)
00:06:39.044 [2024-05-16 20:05:25.975272] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414'
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:39.044 20:05:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14
00:06:39.304 [2024-05-16 20:05:26.134401] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:06:39.304 [2024-05-16 20:05:26.134486] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668829 ]
00:06:39.304 EAL: No free 2048 kB hugepages reported on node 1
00:06:39.304 [2024-05-16 20:05:26.294347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.304 [2024-05-16 20:05:26.358308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.304 [2024-05-16 20:05:26.416671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:39.304 [2024-05-16 20:05:26.432638] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:06:39.304 [2024-05-16 20:05:26.432991] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 ***
00:06:39.304 INFO: Running with entropic power schedule (0xFF, 100).
00:06:39.563 INFO: Seed: 2720908492
00:06:39.563 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f),
00:06:39.563 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0),
00:06:39.563 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:06:39.563 INFO: A corpus is not provided, starting from an empty corpus
00:06:39.563 #2 INITED exec/s: 0 rss: 64Mb
00:06:39.563 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:39.563 This may also happen if the target rejected all inputs we tried so far
00:06:39.563 [2024-05-16 20:05:26.482532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.482567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.563 [2024-05-16 20:05:26.482635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.482652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.563 [2024-05-16 20:05:26.482721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.482737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.563 NEW_FUNC[1/687]: 0x496df0 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392
00:06:39.563 NEW_FUNC[2/687]: 0x4b82b0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340
00:06:39.563 #13 NEW cov: 11826 ft: 11817 corp: 2/28b lim: 35 exec/s: 0 rss: 71Mb L: 27/27 MS: 1 InsertRepeatedBytes-
00:06:39.563 [2024-05-16 20:05:26.632954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.633001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.563 [2024-05-16 20:05:26.633086] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.633110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.563 [2024-05-16 20:05:26.633180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.633198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.563 [2024-05-16 20:05:26.633267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.633286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:39.563 #15 NEW cov: 11956 ft: 12638 corp: 3/60b lim: 35 exec/s: 0 rss: 71Mb L: 32/32 MS: 2 ShuffleBytes-InsertRepeatedBytes-
00:06:39.563 [2024-05-16 20:05:26.682653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.682680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.563 [2024-05-16 20:05:26.682748] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.682762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.563 [2024-05-16 20:05:26.682816] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.563 [2024-05-16 20:05:26.682828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.823 #16 NEW cov: 11962 ft: 12846 corp: 4/87b lim: 35 exec/s: 0 rss: 71Mb L: 27/32 MS: 1 ShuffleBytes-
00:06:39.823 [2024-05-16 20:05:26.732972] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.823 [2024-05-16 20:05:26.733000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.823 [2024-05-16 20:05:26.733054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.823 [2024-05-16 20:05:26.733066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.823 [2024-05-16 20:05:26.733119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.823 [2024-05-16 20:05:26.733133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.823 [2024-05-16 20:05:26.733186] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.823 [2024-05-16 20:05:26.733199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:39.823 #17 NEW cov: 12047 ft: 13206 corp: 5/119b lim: 35 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeBit-
00:06:39.823 [2024-05-16 20:05:26.783130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.823 [2024-05-16 20:05:26.783156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.823 [2024-05-16 20:05:26.783211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.823 [2024-05-16 20:05:26.783225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.823 [2024-05-16 20:05:26.783279] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.823 [2024-05-16 20:05:26.783291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.823 [2024-05-16 20:05:26.783358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.823 [2024-05-16 20:05:26.783371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:39.823 #18 NEW cov: 12047 ft: 13349 corp: 6/148b lim: 35 exec/s: 0 rss: 71Mb L: 29/32 MS: 1 InsertRepeatedBytes-
00:06:39.823 [2024-05-16 20:05:26.823238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.823263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.823320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.823332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.823382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.823394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.823447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.823464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:39.824 #19 NEW cov: 12047 ft: 13434 corp: 7/178b lim: 35 exec/s: 0 rss: 72Mb L: 30/32 MS: 1 InsertByte-
00:06:39.824 [2024-05-16 20:05:26.873364] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.873389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.873442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.873460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.873530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.873543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.873595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.873609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:39.824 #20 NEW cov: 12047 ft: 13525 corp: 8/206b lim: 35 exec/s: 0 rss: 72Mb L: 28/32 MS: 1 InsertByte-
00:06:39.824 [2024-05-16 20:05:26.913472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.913498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.913552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.913567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.913618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.913631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.913684] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.913696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:39.824 #21 NEW cov: 12047 ft: 13570 corp: 9/236b lim: 35 exec/s: 0 rss: 72Mb L: 30/32 MS: 1 InsertByte-
00:06:39.824 [2024-05-16 20:05:26.953293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.953318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:39.824 [2024-05-16 20:05:26.953374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.824 [2024-05-16 20:05:26.953386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.083 #22 NEW cov: 12047 ft: 13813 corp: 10/250b lim: 35 exec/s: 0 rss: 72Mb L: 14/32 MS: 1 EraseBytes-
00:06:40.083 [2024-05-16 20:05:27.003245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES LBA RANGE TYPE cid:4 cdw10:80000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.003271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.083 NEW_FUNC[1/1]: 0x4b3520 in feat_lba_range_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:289
00:06:40.083 #26 NEW cov: 12058 ft: 14649 corp: 11/259b lim: 35 exec/s: 0 rss: 72Mb L: 9/32 MS: 4 CMP-ChangeBinInt-ChangeBit-InsertRepeatedBytes- DE: "\021\000"-
00:06:40.083 NEW_FUNC[1/1]: 0x11e49e0 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1759
00:06:40.083 #28 NEW cov: 12081 ft: 14721 corp: 12/266b lim: 35 exec/s: 0 rss: 72Mb L: 7/32 MS: 2 CrossOver-InsertByte-
00:06:40.083 [2024-05-16 20:05:27.084057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.084088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.083 [2024-05-16 20:05:27.084149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.084165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.083 [2024-05-16 20:05:27.084227] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.084244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:40.083 [2024-05-16 20:05:27.084303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.084320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:40.083 #29 NEW cov: 12081 ft: 14736 corp: 13/298b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart-
00:06:40.083 [2024-05-16 20:05:27.133775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.133804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.083 [2024-05-16 20:05:27.133857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.133871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.083 #30 NEW cov: 12081 ft: 14758 corp: 14/313b lim: 35 exec/s: 0 rss: 72Mb L: 15/32 MS: 1 EraseBytes-
00:06:40.083 [2024-05-16 20:05:27.183928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.183954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.083 [2024-05-16 20:05:27.184008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.184021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.083 #32 NEW cov: 12081 ft: 14834 corp: 15/331b lim: 35 exec/s: 0 rss: 72Mb L: 18/32 MS: 2 InsertByte-CrossOver-
00:06:40.083 [2024-05-16 20:05:27.223933] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.083 [2024-05-16 20:05:27.223958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.342 #33 NEW cov: 12081 ft: 14861 corp: 16/339b lim: 35 exec/s: 0 rss: 72Mb L: 8/32 MS: 1 EraseBytes-
00:06:40.342 [2024-05-16 20:05:27.274381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.342 [2024-05-16 20:05:27.274407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.342 [2024-05-16 20:05:27.274463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.342 [2024-05-16 20:05:27.274477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.342 NEW_FUNC[1/2]: 0x4b4c30 in feat_error_recover /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:304
00:06:40.342 NEW_FUNC[2/2]: 0x11e0520 in nvmf_ctrlr_set_features_error_recovery /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1717
00:06:40.342 #34 NEW cov: 12135 ft: 14925 corp: 17/366b lim: 35 exec/s: 0 rss: 72Mb L: 27/32 MS: 1 ChangeBinInt-
00:06:40.342 [2024-05-16 20:05:27.314618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.342 [2024-05-16 20:05:27.314645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.342 [2024-05-16 20:05:27.314701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.342 [2024-05-16 20:05:27.314714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.342 [2024-05-16 20:05:27.314766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.342 [2024-05-16 20:05:27.314778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:40.342 [2024-05-16 20:05:27.314830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f7 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.342 [2024-05-16 20:05:27.314846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:40.343 #35 NEW cov: 12135 ft: 14942 corp: 18/394b lim: 35 exec/s: 0 rss: 72Mb L: 28/32 MS: 1 ChangeBit-
00:06:40.343 [2024-05-16 20:05:27.364781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.364806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.364878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.364891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.364941] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.364954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.365008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.365021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:40.343 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:06:40.343 #36 NEW cov: 12158 ft: 14976 corp: 19/426b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes-
00:06:40.343 [2024-05-16 20:05:27.414928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.414954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.415007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.415020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.415075] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.415087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.415138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.415151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:40.343 #37 NEW cov: 12158 ft: 14989 corp: 20/460b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CopyPart-
00:06:40.343 [2024-05-16 20:05:27.455050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.455074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.455146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.455159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.455212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.455227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:40.343 [2024-05-16 20:05:27.455280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:000000db SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:40.343 [2024-05-16 20:05:27.455291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:40.343 #38 NEW cov: 12165 ft: 15011 corp: 21/490b lim: 35 exec/s: 38
rss: 72Mb L: 30/34 MS: 1 InsertByte- 00:06:40.602 [2024-05-16 20:05:27.494838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 20:05:27.494864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.602 [2024-05-16 20:05:27.494918] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 20:05:27.494932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.602 #39 NEW cov: 12165 ft: 15061 corp: 22/505b lim: 35 exec/s: 39 rss: 72Mb L: 15/34 MS: 1 CopyPart- 00:06:40.602 [2024-05-16 20:05:27.545138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 20:05:27.545163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.602 [2024-05-16 20:05:27.545216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 20:05:27.545229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.602 #45 NEW cov: 12165 ft: 15090 corp: 23/532b lim: 35 exec/s: 45 rss: 72Mb L: 27/34 MS: 1 ShuffleBytes- 00:06:40.602 [2024-05-16 20:05:27.595157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 20:05:27.595182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.602 [2024-05-16 20:05:27.595239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 20:05:27.595249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.602 #46 NEW cov: 12165 ft: 15122 corp: 24/546b lim: 35 exec/s: 46 rss: 72Mb L: 14/34 MS: 1 ChangeBinInt- 00:06:40.602 [2024-05-16 20:05:27.635197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 20:05:27.635221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.602 [2024-05-16 20:05:27.635293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 20:05:27.635304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.602 #47 NEW cov: 12165 ft: 15125 corp: 25/560b lim: 35 exec/s: 47 rss: 72Mb L: 14/34 MS: 1 ChangeBinInt- 00:06:40.602 [2024-05-16 20:05:27.685389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.602 [2024-05-16 
20:05:27.685413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.602 [2024-05-16 20:05:27.685470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.603 [2024-05-16 20:05:27.685487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.603 #48 NEW cov: 12165 ft: 15130 corp: 26/576b lim: 35 exec/s: 48 rss: 72Mb L: 16/34 MS: 1 EraseBytes- 00:06:40.603 [2024-05-16 20:05:27.725488] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.603 [2024-05-16 20:05:27.725513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.603 [2024-05-16 20:05:27.725568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.603 [2024-05-16 20:05:27.725582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.862 #49 NEW cov: 12165 ft: 15137 corp: 27/591b lim: 35 exec/s: 49 rss: 73Mb L: 15/34 MS: 1 ShuffleBytes- 00:06:40.862 [2024-05-16 20:05:27.775973] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.775997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:27.776068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.776081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:27.776137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.776149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:27.776203] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.776216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.862 #50 NEW cov: 12165 ft: 15153 corp: 28/621b lim: 35 exec/s: 50 rss: 73Mb L: 30/34 MS: 1 CopyPart- 00:06:40.862 [2024-05-16 20:05:27.825615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.825640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.862 #51 NEW cov: 12165 ft: 15180 corp: 29/629b lim: 35 exec/s: 51 rss: 73Mb L: 8/34 MS: 1 ChangeBit- 00:06:40.862 [2024-05-16 20:05:27.875766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET 
FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.875792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.862 #52 NEW cov: 12165 ft: 15192 corp: 30/641b lim: 35 exec/s: 52 rss: 73Mb L: 12/34 MS: 1 EraseBytes- 00:06:40.862 [2024-05-16 20:05:27.916183] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.916209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:27.916263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.916276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:27.966542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.966570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:27.966642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.966655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:27.966709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.966720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:27.966774] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:27.966786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.862 #54 NEW cov: 12165 ft: 15205 corp: 31/669b lim: 35 exec/s: 54 rss: 73Mb L: 28/34 MS: 2 CMP-InsertByte- DE: "\000\020"- 00:06:40.862 [2024-05-16 20:05:28.006621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:28.006648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:28.006704] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:28.006718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:28.006772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 
[2024-05-16 20:05:28.006784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.862 [2024-05-16 20:05:28.006839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.862 [2024-05-16 20:05:28.006852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.120 #55 NEW cov: 12165 ft: 15213 corp: 32/700b lim: 35 exec/s: 55 rss: 73Mb L: 31/34 MS: 1 EraseBytes- 00:06:41.120 [2024-05-16 20:05:28.046377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.046402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.046477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.046491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.120 #56 NEW cov: 12165 ft: 15263 corp: 33/716b lim: 35 exec/s: 56 rss: 73Mb L: 16/34 MS: 1 InsertByte- 00:06:41.120 [2024-05-16 20:05:28.086805] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.086830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.086887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.086903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.086955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.086968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.087025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.087037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.120 #57 NEW cov: 12165 ft: 15279 corp: 34/745b lim: 35 exec/s: 57 rss: 73Mb L: 29/34 MS: 1 InsertByte- 00:06:41.120 [2024-05-16 20:05:28.126959] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.126984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.127057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 
20:05:28.127072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.127125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.127137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.127194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.127206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.120 #58 NEW cov: 12165 ft: 15288 corp: 35/773b lim: 35 exec/s: 58 rss: 73Mb L: 28/34 MS: 1 InsertByte- 00:06:41.120 [2024-05-16 20:05:28.167049] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.167076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.167149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.167163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.167220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.167234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.167287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.167301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.120 #59 NEW cov: 12165 ft: 15353 corp: 36/805b lim: 35 exec/s: 59 rss: 73Mb L: 32/34 MS: 1 ChangeBinInt- 00:06:41.120 [2024-05-16 20:05:28.207225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.207250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.207304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.207320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.207374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.207387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.207441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.207458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.120 #60 NEW cov: 12165 ft: 15392 corp: 37/833b lim: 35 exec/s: 60 rss: 73Mb L: 28/34 MS: 1 InsertByte- 00:06:41.120 [2024-05-16 20:05:28.246978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.247003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.120 [2024-05-16 20:05:28.247059] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.120 [2024-05-16 20:05:28.247073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.378 #61 NEW cov: 12165 ft: 15401 corp: 38/851b lim: 35 exec/s: 61 rss: 73Mb L: 18/34 MS: 1 ChangeBinInt- 00:06:41.378 [2024-05-16 20:05:28.297463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.297488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.297558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.297571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.297628] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.297641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.297693] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.297705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.378 #62 NEW cov: 12165 ft: 15402 corp: 39/879b lim: 35 exec/s: 62 rss: 74Mb L: 28/34 MS: 1 ChangeByte- 00:06:41.378 [2024-05-16 20:05:28.347612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.347638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.347691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.347704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.347758] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.347772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.347825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.347838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.378 #63 NEW cov: 12165 ft: 15413 corp: 40/907b lim: 35 exec/s: 63 rss: 74Mb L: 28/34 MS: 1 ShuffleBytes- 00:06:41.378 [2024-05-16 20:05:28.397748] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.397772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.397843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.397856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.397912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.397924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.397977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.397989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.378 #64 NEW cov: 12165 ft: 15424 corp: 41/938b lim: 35 exec/s: 64 rss: 74Mb L: 31/34 MS: 1 InsertByte- 00:06:41.378 [2024-05-16 20:05:28.437861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.437886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.437956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.437969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.438025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.438036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.378 [2024-05-16 20:05:28.438091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.378 [2024-05-16 20:05:28.438104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.378 #65 NEW cov: 12165 ft: 15425 corp: 42/967b lim: 35 exec/s: 32 rss: 74Mb L: 29/34 MS: 1 InsertByte- 00:06:41.378 #65 DONE cov: 12165 ft: 15425 corp: 42/967b lim: 35 exec/s: 32 rss: 74Mb 00:06:41.378 ###### Recommended dictionary. ###### 00:06:41.378 "\021\000" # Uses: 0 00:06:41.378 "\000\020" # Uses: 0 00:06:41.378 ###### End of recommended dictionary. ###### 00:06:41.378 Done 65 runs in 2 second(s) 00:06:41.379 [2024-05-16 20:05:28.472705] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:41.637 20:05:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:41.637 [2024-05-16 20:05:28.640248] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:41.637 [2024-05-16 20:05:28.640309] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1669270 ] 00:06:41.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.897 [2024-05-16 20:05:28.798777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.897 [2024-05-16 20:05:28.863429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.897 [2024-05-16 20:05:28.921835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.897 [2024-05-16 20:05:28.937805] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:41.897 [2024-05-16 20:05:28.938157] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:41.897 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.897 INFO: Seed: 932936022 00:06:41.897 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:41.897 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:41.897 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:41.897 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.897 #2 INITED exec/s: 0 rss: 64Mb 00:06:41.897 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.897 This may also happen if the target rejected all inputs we tried so far 00:06:41.897 [2024-05-16 20:05:29.005141] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:41.897 [2024-05-16 20:05:29.005460] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:41.897 [2024-05-16 20:05:29.006017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.897 [2024-05-16 20:05:29.006065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.897 [2024-05-16 20:05:29.006172] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.897 [2024-05-16 20:05:29.006187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.897 [2024-05-16 20:05:29.006282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.897 [2024-05-16 20:05:29.006297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.155 NEW_FUNC[1/686]: 0x498330 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:42.155 NEW_FUNC[2/686]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.155 #4 NEW cov: 11828 ft: 11829 corp: 2/26b lim: 35 exec/s: 0 rss: 71Mb L: 25/25 MS: 2 InsertByte-InsertRepeatedBytes- 
00:06:42.155 [2024-05-16 20:05:29.165553] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.155 [2024-05-16 20:05:29.165860] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.155 [2024-05-16 20:05:29.166137] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.155 [2024-05-16 20:05:29.166634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.155 [2024-05-16 20:05:29.166681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.155 [2024-05-16 20:05:29.166789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.155 [2024-05-16 20:05:29.166811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.155 [2024-05-16 20:05:29.166907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.155 [2024-05-16 20:05:29.166926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.155 [2024-05-16 20:05:29.167019] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.155 [2024-05-16 20:05:29.167037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.155 #5 NEW cov: 11958 ft: 12852 corp: 3/57b lim: 35 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 CopyPart- 00:06:42.156 [2024-05-16 20:05:29.235618] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.156 [2024-05-16 20:05:29.235909] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.156 [2024-05-16 20:05:29.236172] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.156 [2024-05-16 20:05:29.236630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.156 [2024-05-16 20:05:29.236657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.156 [2024-05-16 20:05:29.236743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.156 [2024-05-16 20:05:29.236756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.156 [2024-05-16 20:05:29.236845] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.156 [2024-05-16 20:05:29.236858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.156 [2024-05-16 20:05:29.236944] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.156 [2024-05-16 20:05:29.236955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.156 #6 NEW cov: 11964 ft: 13082 corp: 4/88b lim: 35 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 ChangeByte- 00:06:42.156 [2024-05-16 20:05:29.295860] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.156 [2024-05-16 20:05:29.296142] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.156 [2024-05-16 20:05:29.296404] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.156 [2024-05-16 20:05:29.296895] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.156 [2024-05-16 20:05:29.296922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.156 [2024-05-16 20:05:29.297009] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.156 [2024-05-16 20:05:29.297023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.156 [2024-05-16 20:05:29.297106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.156 [2024-05-16 20:05:29.297119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.156 [2024-05-16 20:05:29.297209] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.156 [2024-05-16 20:05:29.297222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.415 #7 NEW cov: 12049 ft: 13373 corp: 5/119b lim: 35 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 CMP- DE: "\010\000"- 00:06:42.415 [2024-05-16 20:05:29.346548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.346574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.415 NEW_FUNC[1/1]: 0x4b8780 in feat_async_event_cfg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:346 00:06:42.415 #11 NEW cov: 12153 ft: 13869 corp: 6/138b lim: 35 exec/s: 0 rss: 71Mb L: 19/31 MS: 4 CopyPart-CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:42.415 [2024-05-16 20:05:29.396303] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.415 [2024-05-16 20:05:29.396595] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.415 [2024-05-16 20:05:29.396851] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.415 [2024-05-16 20:05:29.397347] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.397373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.415 [2024-05-16 20:05:29.397459] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.397476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.415 [2024-05-16 20:05:29.397579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.397593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.415 [2024-05-16 20:05:29.397673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.397687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.415 #12 NEW cov: 12153 ft: 13992 corp: 7/169b lim: 35 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 ChangeByte- 00:06:42.415 [2024-05-16 20:05:29.446517] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.415 [2024-05-16 20:05:29.446812] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.415 [2024-05-16 20:05:29.447054] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.415 [2024-05-16 20:05:29.447531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.447559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.415 [2024-05-16 20:05:29.447646] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.447660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.415 [2024-05-16 20:05:29.447749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.447762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.415 [2024-05-16 20:05:29.447848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.447861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.415 #18 NEW cov: 12153 ft: 14106 corp: 8/200b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ShuffleBytes- 00:06:42.415 [2024-05-16 20:05:29.517430] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.415 [2024-05-16 20:05:29.517459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.415 #19 NEW cov: 12153 ft: 14155 corp: 9/219b lim: 35 exec/s: 0 rss: 72Mb L: 19/31 MS: 1 ChangeByte- 00:06:42.674 [2024-05-16 20:05:29.587144] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.587708] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.588229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.588256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.588349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000083 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.588365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.588458] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.588471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.588555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.588569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.674 #20 NEW cov: 12153 ft: 14222 corp: 10/250b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ChangeBinInt- 00:06:42.674 [2024-05-16 20:05:29.657524] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.657830] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.658096] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.658568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.658596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.658681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.658695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.658782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.658797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.658894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.658909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.674 #21 NEW cov: 12153 ft: 14389 corp: 11/281b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ShuffleBytes- 00:06:42.674 [2024-05-16 20:05:29.707737] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.707994] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.708269] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.708792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.708818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.708902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.708916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.709001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.709015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.674 [2024-05-16 20:05:29.709101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.674 [2024-05-16 20:05:29.709118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.674 #22 NEW cov: 12153 ft: 14441 corp: 12/312b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ChangeByte- 00:06:42.674 [2024-05-16 20:05:29.767946] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.768237] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.674 [2024-05-16 20:05:29.768525] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.675 [2024-05-16 20:05:29.769024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.675 [2024-05-16 20:05:29.769051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.675 [2024-05-16 20:05:29.769136] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST 
RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.675 [2024-05-16 20:05:29.769149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.675 [2024-05-16 20:05:29.769232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.675 [2024-05-16 20:05:29.769245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.675 [2024-05-16 20:05:29.769334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.675 [2024-05-16 20:05:29.769346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.675 #23 NEW cov: 12153 ft: 14516 corp: 13/346b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:42.934 [2024-05-16 20:05:29.828301] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.828596] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.828891] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.829153] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.829641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.829667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:29.829755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.829769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:29.829862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.829875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:29.829962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.829975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:29.830065] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:8 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.830081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.934 #24 NEW cov: 12153 ft: 14651 corp: 14/381b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 
CrossOver- 00:06:42.934 [2024-05-16 20:05:29.878433] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.878725] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.878987] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.879485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.879511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:29.879590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.879603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:29.879698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.879712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:29.879801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.879814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.934 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:42.934 #25 NEW cov: 12176 ft: 14708 corp: 15/412b lim: 35 exec/s: 0 rss: 72Mb L: 31/35 MS: 1 ChangeBit- 00:06:42.934 [2024-05-16 20:05:29.948585] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.949108] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.949133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:29.949218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:29.949230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.934 #26 NEW cov: 12176 ft: 14898 corp: 16/426b lim: 35 exec/s: 0 rss: 72Mb L: 14/35 MS: 1 EraseBytes- 00:06:42.934 [2024-05-16 20:05:29.998952] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.999247] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:29.999514] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 
[2024-05-16 20:05:29.999991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:30.000016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:30.000097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:30.000116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:30.000202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:30.000216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:30.000314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:30.000330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.934 #27 NEW cov: 12176 ft: 14927 corp: 17/457b lim: 35 exec/s: 27 rss: 72Mb L: 31/35 MS: 1 ChangeByte- 00:06:42.934 [2024-05-16 20:05:30.049386] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:30.049706] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:30.049997] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:42.934 [2024-05-16 20:05:30.050469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:30.050500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:30.050590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:30.050603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:30.050693] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:30.050707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.934 [2024-05-16 20:05:30.050788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.934 [2024-05-16 20:05:30.050801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.197 #28 NEW cov: 12176 ft: 14952 corp: 18/488b lim: 35 exec/s: 28 rss: 72Mb L: 31/35 MS: 1 ChangeBinInt- 00:06:43.197 [2024-05-16 20:05:30.119710] 
ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.119975] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.120249] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.120746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.120775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.120868] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.120881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.120962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.120974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.121068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.121080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.197 #34 NEW cov: 12176 ft: 14976 corp: 19/519b lim: 35 exec/s: 34 rss: 72Mb L: 31/35 MS: 1 ChangeByte- 00:06:43.197 [2024-05-16 20:05:30.169958] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.170259] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.170547] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.171043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.171070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.171157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.171170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.171261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.171276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.171370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.171383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.197 #35 NEW cov: 12176 ft: 14981 corp: 20/550b lim: 35 exec/s: 35 rss: 72Mb L: 31/35 MS: 1 ChangeBit- 00:06:43.197 [2024-05-16 20:05:30.220150] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.220715] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.221352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000083 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.221377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.221461] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.221475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.221577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.221591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.197 NEW_FUNC[1/1]: 0x4b82b0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:43.197 #36 NEW cov: 12190 ft: 15095 corp: 21/581b lim: 35 exec/s: 36 rss: 72Mb L: 31/35 MS: 1 ShuffleBytes- 00:06:43.197 [2024-05-16 20:05:30.280538] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.280814] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.281354] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.281873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.281899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.281988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.282002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.282091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.282104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 
20:05:30.282188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.282200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.282284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:8 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.282297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.197 #37 NEW cov: 12190 ft: 15115 corp: 22/616b lim: 35 exec/s: 37 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:43.197 [2024-05-16 20:05:30.330686] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.330993] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.331254] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.331544] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.197 [2024-05-16 20:05:30.332045] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.332071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.332161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.332175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.197 [2024-05-16 20:05:30.332268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.197 [2024-05-16 20:05:30.332283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.198 [2024-05-16 20:05:30.332370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.198 [2024-05-16 20:05:30.332383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.198 [2024-05-16 20:05:30.332483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:8 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.198 [2024-05-16 20:05:30.332496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.457 #38 NEW cov: 12190 ft: 15130 corp: 23/651b lim: 35 exec/s: 38 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:06:43.457 [2024-05-16 20:05:30.390958] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.457 [2024-05-16 20:05:30.391232] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: 
Get Features - Invalid Namespace ID 00:06:43.457 [2024-05-16 20:05:30.391707] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.391732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.457 [2024-05-16 20:05:30.391825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.391838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.457 [2024-05-16 20:05:30.391933] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.391945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.457 #39 NEW cov: 12190 ft: 15168 corp: 24/674b lim: 35 exec/s: 39 rss: 73Mb L: 23/35 MS: 1 EraseBytes- 00:06:43.457 [2024-05-16 20:05:30.451380] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.457 [2024-05-16 20:05:30.451670] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.457 [2024-05-16 20:05:30.451962] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.457 [2024-05-16 20:05:30.452427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.452451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.457 [2024-05-16 20:05:30.452545] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.452558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.457 [2024-05-16 20:05:30.452648] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.452661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.457 [2024-05-16 20:05:30.452749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.452763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.457 #40 NEW cov: 12190 ft: 15175 corp: 25/705b lim: 35 exec/s: 40 rss: 73Mb L: 31/35 MS: 1 ChangeBinInt- 00:06:43.457 [2024-05-16 20:05:30.501701] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.457 [2024-05-16 20:05:30.502234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 
[2024-05-16 20:05:30.502260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.457 [2024-05-16 20:05:30.502344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.502358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.457 #41 NEW cov: 12190 ft: 15181 corp: 26/725b lim: 35 exec/s: 41 rss: 73Mb L: 20/35 MS: 1 EraseBytes- 00:06:43.457 [2024-05-16 20:05:30.552397] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.552421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.457 #42 NEW cov: 12190 ft: 15188 corp: 27/745b lim: 35 exec/s: 42 rss: 73Mb L: 20/35 MS: 1 InsertByte- 00:06:43.457 [2024-05-16 20:05:30.602671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.457 [2024-05-16 20:05:30.602695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.716 #43 NEW cov: 12190 ft: 15194 corp: 28/765b lim: 35 exec/s: 43 rss: 73Mb L: 20/35 MS: 1 ChangeByte- 00:06:43.716 [2024-05-16 20:05:30.662176] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.662491] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.663004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.663031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.663129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.663142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.663233] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.663246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.716 #44 NEW cov: 12190 ft: 15215 corp: 29/790b lim: 35 exec/s: 44 rss: 73Mb L: 25/35 MS: 1 ShuffleBytes- 00:06:43.716 [2024-05-16 20:05:30.712539] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.712824] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.713101] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.713578] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.713605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.713689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.713704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.713790] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.713803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.713892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.713907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.716 #45 NEW cov: 12190 ft: 15221 corp: 30/822b lim: 35 exec/s: 45 rss: 73Mb L: 32/35 MS: 1 CrossOver- 00:06:43.716 [2024-05-16 20:05:30.783018] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.783278] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.783548] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.783819] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.716 [2024-05-16 20:05:30.784352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.784380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.784472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.784488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.784583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.784596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.784694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.784707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.716 [2024-05-16 20:05:30.784801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:8 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.784815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.716 #46 NEW cov: 12190 ft: 15247 corp: 31/857b lim: 35 exec/s: 46 rss: 73Mb L: 35/35 MS: 1 CMP- DE: "|\001\000\000"- 00:06:43.716 [2024-05-16 20:05:30.853862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.716 [2024-05-16 20:05:30.853887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.976 #47 NEW cov: 12190 ft: 15255 corp: 32/876b lim: 35 exec/s: 47 rss: 73Mb L: 19/35 MS: 1 ShuffleBytes- 00:06:43.976 [2024-05-16 20:05:30.923682] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.976 [2024-05-16 20:05:30.923979] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.976 [2024-05-16 20:05:30.924257] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.976 [2024-05-16 20:05:30.924765] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.976 [2024-05-16 20:05:30.924794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.976 [2024-05-16 20:05:30.924883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.976 [2024-05-16 20:05:30.924898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.976 [2024-05-16 20:05:30.924997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.976 [2024-05-16 20:05:30.925011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.976 [2024-05-16 20:05:30.925106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.976 [2024-05-16 20:05:30.925121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.976 #48 NEW cov: 12190 ft: 15287 corp: 33/908b lim: 35 exec/s: 48 rss: 74Mb L: 32/35 MS: 1 CopyPart- 00:06:43.976 [2024-05-16 20:05:30.994068] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.976 [2024-05-16 20:05:30.994343] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.976 [2024-05-16 20:05:30.994625] ctrlr.c:1880:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:06:43.976 [2024-05-16 20:05:30.995125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.976 [2024-05-16 20:05:30.995153] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.976 [2024-05-16 20:05:30.995246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:5 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.976 [2024-05-16 20:05:30.995261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.976 [2024-05-16 20:05:30.995345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:6 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.976 [2024-05-16 20:05:30.995360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.976 [2024-05-16 20:05:30.995444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:7 cdw10:00000483 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.976 [2024-05-16 20:05:30.995461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.976 #49 NEW cov: 12190 ft: 15303 corp: 34/941b lim: 35 exec/s: 24 rss: 74Mb L: 33/35 MS: 1 PersAutoDict- DE: "\010\000"- 00:06:43.976 #49 DONE cov: 12190 ft: 15303 corp: 34/941b lim: 35 exec/s: 24 rss: 74Mb 00:06:43.976 ###### Recommended dictionary. ###### 00:06:43.976 "\010\000" # Uses: 3 00:06:43.976 "|\001\000\000" # Uses: 0 00:06:43.976 ###### End of recommended dictionary. ###### 00:06:43.976 Done 49 runs in 2 second(s) 00:06:43.976 [2024-05-16 20:05:31.018685] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:44.235 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.235 20:05:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.235 20:05:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.235 20:05:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:44.235 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:44.235 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:44.235 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.236 20:05:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:44.236 [2024-05-16 20:05:31.176973] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:44.236 [2024-05-16 20:05:31.177031] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1669715 ] 00:06:44.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.236 [2024-05-16 20:05:31.345379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.495 [2024-05-16 20:05:31.410105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.495 [2024-05-16 20:05:31.468503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.495 [2024-05-16 20:05:31.484466] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:44.495 [2024-05-16 20:05:31.484830] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:44.495 INFO: Running with entropic power schedule (0xFF, 100). 00:06:44.495 INFO: Seed: 3479932862 00:06:44.495 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:44.495 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:44.495 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:44.495 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.495 #2 INITED exec/s: 0 rss: 64Mb 00:06:44.495 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:44.495 This may also happen if the target rejected all inputs we tried so far 00:06:44.495 [2024-05-16 20:05:31.529483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.495 [2024-05-16 20:05:31.529513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.495 [2024-05-16 20:05:31.529543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.495 [2024-05-16 20:05:31.529558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.495 [2024-05-16 20:05:31.529586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.495 [2024-05-16 20:05:31.529600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.495 [2024-05-16 20:05:31.529625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.495 [2024-05-16 20:05:31.529639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.754 NEW_FUNC[1/686]: 0x4997e0 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:44.755 NEW_FUNC[2/686]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.755 #5 NEW cov: 11908 ft: 11900 corp: 2/93b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes- 00:06:44.755 [2024-05-16 20:05:31.699758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.755 [2024-05-16 20:05:31.699795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.755 #8 NEW cov: 12038 ft: 13035 corp: 3/129b lim: 105 exec/s: 0 rss: 71Mb L: 36/92 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:06:44.755 [2024-05-16 20:05:31.759817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.755 [2024-05-16 20:05:31.759847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.755 #9 NEW cov: 12044 ft: 13229 corp: 4/165b lim: 105 exec/s: 0 rss: 71Mb L: 36/92 MS: 1 ChangeBit- 00:06:44.755 [2024-05-16 20:05:31.840035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.755 [2024-05-16 20:05:31.840062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.755 [2024-05-16 20:05:31.840093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.755 [2024-05-16 20:05:31.840108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.755 #10 NEW cov: 12129 ft: 13814 corp: 5/219b lim: 105 exec/s: 0 rss: 71Mb L: 54/92 MS: 1 CopyPart- 00:06:44.755 [2024-05-16 20:05:31.900276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.755 [2024-05-16 20:05:31.900304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.755 [2024-05-16 20:05:31.900334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.755 [2024-05-16 20:05:31.900349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.755 [2024-05-16 20:05:31.900378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.755 [2024-05-16 20:05:31.900392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.014 #11 NEW cov: 12129 ft: 14143 corp: 6/285b lim: 105 exec/s: 0 rss: 71Mb L: 66/92 MS: 1 InsertRepeatedBytes- 00:06:45.014 [2024-05-16 20:05:31.960367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.014 [2024-05-16 20:05:31.960394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.014 [2024-05-16 20:05:31.960438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.014 [2024-05-16 20:05:31.960459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.014 [2024-05-16 20:05:31.960488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4756000998545536437 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.014 [2024-05-16 20:05:31.960505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.014 #12 NEW cov: 12129 ft: 14307 corp: 7/351b lim: 105 exec/s: 0 rss: 72Mb L: 66/92 MS: 1 ChangeBinInt- 00:06:45.014 [2024-05-16 20:05:32.040611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.014 [2024-05-16 20:05:32.040640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.014 [2024-05-16 20:05:32.040685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.014 [2024-05-16 20:05:32.040701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.014 [2024-05-16 20:05:32.040728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.014 [2024-05-16 20:05:32.040742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.014 #13 NEW cov: 12129 ft: 14408 corp: 8/417b lim: 105 exec/s: 0 rss: 72Mb L: 66/92 MS: 1 ChangeBit- 00:06:45.014 [2024-05-16 20:05:32.100738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.014 [2024-05-16 20:05:32.100768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.014 [2024-05-16 20:05:32.100799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.014 [2024-05-16 20:05:32.100814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.272 #14 NEW cov: 12129 ft: 14455 corp: 9/471b lim: 105 exec/s: 0 rss: 72Mb L: 54/92 MS: 1 CMP- DE: "\\\000\000\000"- 00:06:45.272 [2024-05-16 20:05:32.181025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.181054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.272 [2024-05-16 20:05:32.181098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.181112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.272 [2024-05-16 20:05:32.181139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.181152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.272 [2024-05-16 20:05:32.181178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.181191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.272 #15 NEW cov: 12129 ft: 14520 corp: 10/563b lim: 105 exec/s: 0 rss: 72Mb L: 92/92 MS: 1 ShuffleBytes- 00:06:45.272 [2024-05-16 20:05:32.261214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.261244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.272 [2024-05-16 20:05:32.261277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.261292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.272 [2024-05-16 20:05:32.261319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.261332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.272 #16 NEW cov: 12129 ft: 14555 corp: 11/629b lim: 105 exec/s: 0 rss: 72Mb L: 66/92 MS: 1 ChangeByte- 00:06:45.272 [2024-05-16 20:05:32.341324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.341354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.272 #18 NEW cov: 12129 ft: 14575 corp: 12/666b lim: 105 exec/s: 0 rss: 72Mb L: 37/92 MS: 2 ShuffleBytes-CrossOver- 00:06:45.272 [2024-05-16 20:05:32.391444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.272 [2024-05-16 20:05:32.391481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.530 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:45.530 #19 NEW cov: 12146 ft: 14632 corp: 13/694b lim: 105 exec/s: 0 rss: 72Mb L: 28/92 MS: 1 EraseBytes- 00:06:45.530 [2024-05-16 20:05:32.471743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.471771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.530 [2024-05-16 20:05:32.471801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744070958088192 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.471815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.530 #20 NEW cov: 12146 ft: 14667 corp: 14/749b lim: 105 exec/s: 20 rss: 72Mb L: 55/92 MS: 1 InsertByte- 00:06:45.530 [2024-05-16 20:05:32.531937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.531964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.530 [2024-05-16 20:05:32.531993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744070958088192 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.532008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.530 [2024-05-16 20:05:32.532035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.532049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.530 #21 NEW cov: 12146 ft: 14696 corp: 15/812b lim: 105 exec/s: 21 rss: 72Mb L: 63/92 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:45.530 [2024-05-16 20:05:32.612149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.612176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.530 [2024-05-16 20:05:32.612225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.612239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.530 [2024-05-16 20:05:32.612268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.612281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.530 #22 NEW cov: 12146 ft: 14714 corp: 16/878b lim: 105 exec/s: 22 rss: 72Mb L: 66/92 MS: 1 ChangeByte- 00:06:45.530 [2024-05-16 20:05:32.662207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.662233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.530 [2024-05-16 20:05:32.662277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.530 [2024-05-16 20:05:32.662292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.789 #23 NEW cov: 12146 ft: 14745 corp: 17/932b lim: 105 exec/s: 23 rss: 72Mb L: 54/92 MS: 1 ShuffleBytes- 00:06:45.789 [2024-05-16 20:05:32.722350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18445618173802708991 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.789 [2024-05-16 20:05:32.722377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.789 #24 NEW cov: 12146 ft: 14756 corp: 18/960b lim: 105 exec/s: 24 rss: 72Mb L: 28/92 MS: 1 ChangeBit- 00:06:45.789 [2024-05-16 20:05:32.802538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.789 [2024-05-16 20:05:32.802564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.789 #25 NEW cov: 12146 ft: 14769 corp: 19/983b lim: 105 exec/s: 25 rss: 72Mb L: 23/92 MS: 1 EraseBytes- 00:06:45.789 [2024-05-16 20:05:32.882793] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.789 [2024-05-16 20:05:32.882820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.789 #26 NEW cov: 12146 ft: 14785 corp: 20/1019b lim: 105 exec/s: 26 rss: 72Mb L: 36/92 MS: 1 ChangeBit- 00:06:45.789 [2024-05-16 20:05:32.932899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.789 [2024-05-16 20:05:32.932927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.048 #27 NEW cov: 12146 ft: 14819 corp: 21/1055b lim: 105 exec/s: 27 rss: 72Mb L: 36/92 MS: 1 ChangeByte- 00:06:46.048 [2024-05-16 20:05:33.013268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.048 [2024-05-16 20:05:33.013295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.048 [2024-05-16 20:05:33.013341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1880844497647042559 len:6683 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.048 [2024-05-16 20:05:33.013355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.048 [2024-05-16 20:05:33.013387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1880844493789993498 len:6683 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.048 [2024-05-16 20:05:33.013401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.048 [2024-05-16 20:05:33.013427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:1880844493789993498 len:6683 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.048 [2024-05-16 20:05:33.013440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.048 #28 NEW cov: 12146 ft: 14899 corp: 22/1144b lim: 105 exec/s: 28 rss: 73Mb L: 89/92 MS: 1 InsertRepeatedBytes- 00:06:46.048 [2024-05-16 20:05:33.073311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.048 [2024-05-16 20:05:33.073337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.048 [2024-05-16 20:05:33.073381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.048 [2024-05-16 20:05:33.073395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.048 [2024-05-16 20:05:33.073423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.048 [2024-05-16 20:05:33.073437] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.048 #29 NEW cov: 12146 ft: 14933 corp: 23/1210b lim: 105 exec/s: 29 rss: 73Mb L: 66/92 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:46.048 [2024-05-16 20:05:33.153496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.048 [2024-05-16 20:05:33.153523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.307 #30 NEW cov: 12146 ft: 15000 corp: 24/1246b lim: 105 exec/s: 30 rss: 73Mb L: 36/92 MS: 1 ChangeBit- 00:06:46.307 [2024-05-16 20:05:33.233724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.307 [2024-05-16 20:05:33.233753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.307 #31 NEW cov: 12146 ft: 15037 corp: 25/1282b lim: 105 exec/s: 31 rss: 73Mb L: 36/92 MS: 1 ChangeBinInt- 00:06:46.307 [2024-05-16 20:05:33.293966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65442 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.307 [2024-05-16 20:05:33.293993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.307 [2024-05-16 20:05:33.294037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11646767826930344353 len:41378 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.307 [2024-05-16 20:05:33.294051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.307 [2024-05-16 20:05:33.294078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11646767826930344353 len:41378 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.307 [2024-05-16 20:05:33.294092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.307 [2024-05-16 20:05:33.294117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744072126332927 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.307 [2024-05-16 20:05:33.294137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.307 #32 NEW cov: 12146 ft: 15061 corp: 26/1372b lim: 105 exec/s: 32 rss: 73Mb L: 90/92 MS: 1 InsertRepeatedBytes- 00:06:46.307 [2024-05-16 20:05:33.374123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.307 [2024-05-16 20:05:33.374149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.307 [2024-05-16 20:05:33.374193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.307 [2024-05-16 20:05:33.374207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.307 [2024-05-16 20:05:33.374235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.307 [2024-05-16 20:05:33.374248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.307 #33 NEW cov: 12153 ft: 15076 corp: 27/1438b lim: 105 exec/s: 33 rss: 73Mb L: 66/92 MS: 1 ShuffleBytes- 00:06:46.566 [2024-05-16 20:05:33.454311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.566 [2024-05-16 20:05:33.454342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.566 [2024-05-16 20:05:33.454374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.566 [2024-05-16 20:05:33.454389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.566 #34 NEW cov: 12153 ft: 15101 corp: 28/1495b lim: 105 exec/s: 34 rss: 73Mb L: 57/92 MS: 1 EraseBytes- 00:06:46.566 [2024-05-16 20:05:33.514544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.566 [2024-05-16 20:05:33.514573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.566 [2024-05-16 20:05:33.514602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:55040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.566 [2024-05-16 20:05:33.514617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.566 [2024-05-16 20:05:33.514644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446736536041947135 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.566 [2024-05-16 20:05:33.514657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.566 #35 NEW cov: 12153 ft: 15105 corp: 29/1562b lim: 105 exec/s: 17 rss: 74Mb L: 67/92 MS: 1 CopyPart- 00:06:46.566 #35 DONE cov: 12153 ft: 15105 corp: 29/1562b lim: 105 exec/s: 17 rss: 74Mb 00:06:46.566 ###### Recommended dictionary. ###### 00:06:46.566 "\\\000\000\000" # Uses: 0 00:06:46.566 "\377\377\377\377\377\377\377\377" # Uses: 1 00:06:46.566 ###### End of recommended dictionary. 
###### 00:06:46.566 Done 35 runs in 2 second(s) 00:06:46.566 [2024-05-16 20:05:33.558344] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.566 20:05:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:06:46.825 [2024-05-16 20:05:33.727788] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
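[editor's note] For readers following the harness setup, the `+`-prefixed shell trace above (nvmf/run.sh, the start_llvm_fuzz call for fuzzer 17) condenses to roughly the launch sequence sketched below. This is a minimal reconstruction from the trace, not the script itself: SPDK_DIR is hypothetical shorthand for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk, the "44" port prefix is inferred from printf %02d 17 yielding port=4417, and the -P output-directory flag is omitted for brevity.

  fuzzer_type=17                               # loop index from common.sh; one instance per fuzzer
  port="44$(printf %02d "$fuzzer_type")"       # -> 4417 (inferred: printf %02d 17 precedes port=4417 above)
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

  # Rewrite the template JSON config so this instance's NVMe/TCP target
  # listens on its own port instead of the default 4420
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"

  # Suppress the two leaks expected on the fuzzer's abrupt-shutdown path
  { echo leak:spdk_nvmf_qpair_disconnect
    echo leak:nvmf_ctrlr_create; } > /var/tmp/suppress_nvmf_fuzz

  # Launch the libFuzzer-based target: 1 core, 512 MB hugepages, 1-minute run,
  # persistent corpus directory, fuzzer type 17 selected via -Z
  mkdir -p "$SPDK_DIR/../corpus/llvm_nvmf_${fuzzer_type}"
  LSAN_OPTIONS="report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0" \
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -F "$trid" -c "/tmp/fuzz_json_${fuzzer_type}.conf" -t 1 \
    -D "$SPDK_DIR/../corpus/llvm_nvmf_${fuzzer_type}" -Z "$fuzzer_type"

[/editor's note]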
00:06:46.825 [2024-05-16 20:05:33.727874] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670106 ] 00:06:46.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.825 [2024-05-16 20:05:33.893091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.825 [2024-05-16 20:05:33.956686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.083 [2024-05-16 20:05:34.015305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.083 [2024-05-16 20:05:34.031271] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:47.083 [2024-05-16 20:05:34.031616] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:47.083 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.083 INFO: Seed: 1729983142 00:06:47.083 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:47.083 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:47.083 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:47.083 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.083 #2 INITED exec/s: 0 rss: 63Mb 00:06:47.083 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:47.083 This may also happen if the target rejected all inputs we tried so far 00:06:47.083 [2024-05-16 20:05:34.077042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.083 [2024-05-16 20:05:34.077070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.083 [2024-05-16 20:05:34.077104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.083 [2024-05-16 20:05:34.077116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.083 [2024-05-16 20:05:34.077168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.083 [2024-05-16 20:05:34.077180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.083 NEW_FUNC[1/687]: 0x49cb60 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:47.083 NEW_FUNC[2/687]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.083 #15 NEW cov: 11929 ft: 11930 corp: 2/83b lim: 120 exec/s: 0 rss: 70Mb L: 82/82 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:06:47.083 [2024-05-16 20:05:34.227343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.083 [2024-05-16 20:05:34.227377] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.083 [2024-05-16 20:05:34.227429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.083 [2024-05-16 20:05:34.227443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.083 [2024-05-16 20:05:34.227494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.083 [2024-05-16 20:05:34.227507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.342 #16 NEW cov: 12059 ft: 12532 corp: 3/165b lim: 120 exec/s: 0 rss: 70Mb L: 82/82 MS: 1 CopyPart- 00:06:47.342 [2024-05-16 20:05:34.277753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.277778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.277842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.277853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.277904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.277917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.277967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.277980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.278033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.278046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.342 #20 NEW cov: 12065 ft: 13232 corp: 4/285b lim: 120 exec/s: 0 rss: 70Mb L: 120/120 MS: 4 InsertByte-ChangeByte-InsertByte-InsertRepeatedBytes- 00:06:47.342 [2024-05-16 20:05:34.317832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.317858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.317908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:47.342 [2024-05-16 20:05:34.317921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.317970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.317983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.318028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.318041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.318091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:514 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.318103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.342 #21 NEW cov: 12150 ft: 13448 corp: 5/405b lim: 120 exec/s: 0 rss: 70Mb L: 120/120 MS: 1 ChangeBinInt- 00:06:47.342 [2024-05-16 20:05:34.367511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.367536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.367589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.367604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.342 #22 NEW cov: 12150 ft: 13846 corp: 6/464b lim: 120 exec/s: 0 rss: 70Mb L: 59/120 MS: 1 EraseBytes- 00:06:47.342 [2024-05-16 20:05:34.408105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.408129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.408192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.408203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.408253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.408266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.408316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:47.342 [2024-05-16 20:05:34.408329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.408379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.408394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.342 #23 NEW cov: 12150 ft: 13934 corp: 7/584b lim: 120 exec/s: 0 rss: 70Mb L: 120/120 MS: 1 ChangeByte- 00:06:47.342 [2024-05-16 20:05:34.447853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.447876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.447926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.447939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.447988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.448001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.342 #24 NEW cov: 12150 ft: 14008 corp: 8/656b lim: 120 exec/s: 0 rss: 70Mb L: 72/120 MS: 1 EraseBytes- 00:06:47.342 [2024-05-16 20:05:34.488001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.488025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.488075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.488088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.342 [2024-05-16 20:05:34.488138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.342 [2024-05-16 20:05:34.488152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.601 #25 NEW cov: 12150 ft: 14043 corp: 9/728b lim: 120 exec/s: 0 rss: 70Mb L: 72/120 MS: 1 ShuffleBytes- 00:06:47.601 [2024-05-16 20:05:34.538128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.538152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 
20:05:34.538206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.538219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.538268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.538281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.601 #26 NEW cov: 12150 ft: 14085 corp: 10/816b lim: 120 exec/s: 0 rss: 70Mb L: 88/120 MS: 1 InsertRepeatedBytes- 00:06:47.601 [2024-05-16 20:05:34.588281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.588305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.588358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.588371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.588421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.588433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.601 #27 NEW cov: 12150 ft: 14123 corp: 11/889b lim: 120 exec/s: 0 rss: 70Mb L: 73/120 MS: 1 InsertByte- 00:06:47.601 [2024-05-16 20:05:34.628645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.628670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.628718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.628728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.628777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.628790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.628840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.628853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:06:47.601 [2024-05-16 20:05:34.628900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.628913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.601 #28 NEW cov: 12150 ft: 14194 corp: 12/1009b lim: 120 exec/s: 0 rss: 70Mb L: 120/120 MS: 1 ChangeBit- 00:06:47.601 [2024-05-16 20:05:34.668501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.668526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.668591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.668605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.668667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.668680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.601 #29 NEW cov: 12150 ft: 14242 corp: 13/1081b lim: 120 exec/s: 0 rss: 70Mb L: 72/120 MS: 1 ChangeBit- 00:06:47.601 [2024-05-16 20:05:34.708759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.708782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.601 [2024-05-16 20:05:34.708853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.601 [2024-05-16 20:05:34.708866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.602 [2024-05-16 20:05:34.708915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172840632577 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-16 20:05:34.708927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.602 [2024-05-16 20:05:34.708976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-16 20:05:34.708988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.602 #30 NEW cov: 12150 ft: 14262 corp: 14/1191b lim: 120 exec/s: 0 rss: 70Mb L: 110/120 MS: 1 CrossOver- 00:06:47.860 [2024-05-16 20:05:34.748740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 
20:05:34.748764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.748815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.748828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.748879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.748891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.860 #31 NEW cov: 12150 ft: 14341 corp: 15/1280b lim: 120 exec/s: 0 rss: 70Mb L: 89/120 MS: 1 InsertByte- 00:06:47.860 [2024-05-16 20:05:34.798863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071888056 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.798889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.798925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.798938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.799005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.799018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.860 #32 NEW cov: 12150 ft: 14409 corp: 16/1352b lim: 120 exec/s: 0 rss: 70Mb L: 72/120 MS: 1 ChangeBit- 00:06:47.860 [2024-05-16 20:05:34.849275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.849300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.849366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.849379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.849431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.849445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.849497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:47.860 [2024-05-16 20:05:34.849510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.849561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.849574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.860 #33 NEW cov: 12150 ft: 14437 corp: 17/1472b lim: 120 exec/s: 0 rss: 70Mb L: 120/120 MS: 1 CopyPart- 00:06:47.860 [2024-05-16 20:05:34.889162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.889187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.889233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.889246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.860 [2024-05-16 20:05:34.889296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.860 [2024-05-16 20:05:34.889308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.860 #34 NEW cov: 12150 ft: 14444 corp: 18/1545b lim: 120 exec/s: 0 rss: 71Mb L: 73/120 MS: 1 CopyPart- 00:06:47.861 [2024-05-16 20:05:34.939425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-16 20:05:34.939450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.861 [2024-05-16 20:05:34.939522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-16 20:05:34.939534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.861 [2024-05-16 20:05:34.939583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-16 20:05:34.939596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.861 [2024-05-16 20:05:34.939658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-16 20:05:34.939671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.861 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:47.861 #40 NEW cov: 12173 ft: 
14448 corp: 19/1656b lim: 120 exec/s: 0 rss: 71Mb L: 111/120 MS: 1 CopyPart- 00:06:47.861 [2024-05-16 20:05:34.979557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-16 20:05:34.979585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.861 [2024-05-16 20:05:34.979648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-16 20:05:34.979662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.861 [2024-05-16 20:05:34.979726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-16 20:05:34.979739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.861 [2024-05-16 20:05:34.979790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-16 20:05:34.979803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.120 #41 NEW cov: 12173 ft: 14468 corp: 20/1767b lim: 120 exec/s: 0 rss: 71Mb L: 111/120 MS: 1 ChangeBinInt- 00:06:48.120 [2024-05-16 20:05:35.029540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.029565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.120 [2024-05-16 20:05:35.029631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.029644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.120 [2024-05-16 20:05:35.029694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.029720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.120 #42 NEW cov: 12173 ft: 14485 corp: 21/1849b lim: 120 exec/s: 0 rss: 71Mb L: 82/120 MS: 1 ShuffleBytes- 00:06:48.120 [2024-05-16 20:05:35.069924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.069949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.120 [2024-05-16 20:05:35.069997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.070007] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.120 [2024-05-16 20:05:35.070075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.070088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.120 [2024-05-16 20:05:35.070135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172871631105 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.070147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.120 [2024-05-16 20:05:35.070198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.070213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.120 #43 NEW cov: 12173 ft: 14516 corp: 22/1969b lim: 120 exec/s: 43 rss: 71Mb L: 120/120 MS: 1 ChangeBinInt- 00:06:48.120 [2024-05-16 20:05:35.109742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.109767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.120 [2024-05-16 20:05:35.109817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.109829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.120 [2024-05-16 20:05:35.109880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.120 [2024-05-16 20:05:35.109892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.120 #44 NEW cov: 12173 ft: 14528 corp: 23/2041b lim: 120 exec/s: 44 rss: 71Mb L: 72/120 MS: 1 ChangeBit- 00:06:48.121 [2024-05-16 20:05:35.160028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.160054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.160103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.160115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.160164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838086657 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 
20:05:35.160178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.160229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.160241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.121 #45 NEW cov: 12173 ft: 14538 corp: 24/2152b lim: 120 exec/s: 45 rss: 71Mb L: 111/120 MS: 1 InsertByte- 00:06:48.121 [2024-05-16 20:05:35.210043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.210066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.210116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.210129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.210178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.210191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.121 #46 NEW cov: 12173 ft: 14547 corp: 25/2241b lim: 120 exec/s: 46 rss: 71Mb L: 89/120 MS: 1 ChangeBinInt- 00:06:48.121 [2024-05-16 20:05:35.260475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.260501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.260549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.260559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.260606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.260617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.260663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-16 20:05:35.260675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.121 [2024-05-16 20:05:35.260723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:48.121 [2024-05-16 20:05:35.260735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.381 #47 NEW cov: 12173 ft: 14574 corp: 26/2361b lim: 120 exec/s: 47 rss: 71Mb L: 120/120 MS: 1 CrossOver- 00:06:48.381 [2024-05-16 20:05:35.310599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.310624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.310687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.310710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.310754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172849217793 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.310767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.310814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.310826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.310875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.310887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.381 #48 NEW cov: 12173 ft: 14587 corp: 27/2481b lim: 120 exec/s: 48 rss: 72Mb L: 120/120 MS: 1 ChangeByte- 00:06:48.381 [2024-05-16 20:05:35.350576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.350600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.350676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.350692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.350756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47106 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.350769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.350820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.350832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.381 #49 NEW cov: 12173 ft: 14661 corp: 28/2591b lim: 120 exec/s: 49 rss: 72Mb L: 110/120 MS: 1 CrossOver- 00:06:48.381 [2024-05-16 20:05:35.390591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.390617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.390670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.390683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.390732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.390745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.381 #50 NEW cov: 12173 ft: 14679 corp: 29/2680b lim: 120 exec/s: 50 rss: 72Mb L: 89/120 MS: 1 ShuffleBytes- 00:06:48.381 [2024-05-16 20:05:35.430822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.430846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.430911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.430924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.430971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.430984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.431033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.431045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.381 #51 NEW cov: 12173 ft: 14688 corp: 30/2792b lim: 120 exec/s: 51 rss: 72Mb L: 112/120 MS: 1 InsertRepeatedBytes- 00:06:48.381 [2024-05-16 20:05:35.470811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.470834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 
20:05:35.470884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.470902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.470951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.470962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.381 #52 NEW cov: 12173 ft: 14713 corp: 31/2864b lim: 120 exec/s: 52 rss: 72Mb L: 72/120 MS: 1 ShuffleBytes- 00:06:48.381 [2024-05-16 20:05:35.511429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.511458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.511520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.511533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.511582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.511595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.511655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.511668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.381 [2024-05-16 20:05:35.511714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.381 [2024-05-16 20:05:35.511727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.640 #53 NEW cov: 12173 ft: 14764 corp: 32/2984b lim: 120 exec/s: 53 rss: 72Mb L: 120/120 MS: 1 ChangeBinInt- 00:06:48.640 [2024-05-16 20:05:35.561054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.561078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.640 [2024-05-16 20:05:35.561128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.561141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:06:48.640 [2024-05-16 20:05:35.561189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.561201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.640 #54 NEW cov: 12173 ft: 14784 corp: 33/3066b lim: 120 exec/s: 54 rss: 72Mb L: 82/120 MS: 1 ShuffleBytes- 00:06:48.640 [2024-05-16 20:05:35.601469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.601493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.640 [2024-05-16 20:05:35.601557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.601570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.640 [2024-05-16 20:05:35.601620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.601632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.640 [2024-05-16 20:05:35.601682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.601695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.640 [2024-05-16 20:05:35.601745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.601758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.640 #55 NEW cov: 12173 ft: 14808 corp: 34/3186b lim: 120 exec/s: 55 rss: 72Mb L: 120/120 MS: 1 ChangeBit- 00:06:48.640 [2024-05-16 20:05:35.651678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-16 20:05:35.651703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.640 [2024-05-16 20:05:35.651753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.651766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.641 [2024-05-16 20:05:35.651816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.651830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:06:48.641 [2024-05-16 20:05:35.651878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.651890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.641 [2024-05-16 20:05:35.651939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.651952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.641 #56 NEW cov: 12173 ft: 14815 corp: 35/3306b lim: 120 exec/s: 56 rss: 72Mb L: 120/120 MS: 1 ChangeByte- 00:06:48.641 [2024-05-16 20:05:35.701466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.701490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.641 [2024-05-16 20:05:35.701556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.701569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.641 [2024-05-16 20:05:35.701618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.701633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.641 #57 NEW cov: 12173 ft: 14823 corp: 36/3388b lim: 120 exec/s: 57 rss: 72Mb L: 82/120 MS: 1 ShuffleBytes- 00:06:48.641 [2024-05-16 20:05:35.751802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.751826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.641 [2024-05-16 20:05:35.751876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557431408489873215 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.751889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.641 [2024-05-16 20:05:35.751937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.751949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.641 [2024-05-16 20:05:35.752000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.641 [2024-05-16 20:05:35.752012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.641 #58 NEW cov: 12173 ft: 14831 corp: 37/3484b lim: 120 exec/s: 58 rss: 72Mb L: 96/120 MS: 1 InsertRepeatedBytes- 00:06:48.900 [2024-05-16 20:05:35.791756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.791781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.791828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591016227092664 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.791840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.791889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.791903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.900 #59 NEW cov: 12173 ft: 14837 corp: 38/3571b lim: 120 exec/s: 59 rss: 72Mb L: 87/120 MS: 1 CrossOver- 00:06:48.900 [2024-05-16 20:05:35.842157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.842181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.842242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.842255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.842303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.842315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.842363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.842379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.842427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.842439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.892318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.892342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.892406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.892416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.892465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.892478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.892526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.892539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.892589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.892602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.900 #61 NEW cov: 12173 ft: 14849 corp: 39/3691b lim: 120 exec/s: 61 rss: 72Mb L: 120/120 MS: 2 ChangeByte-CopyPart- 00:06:48.900 [2024-05-16 20:05:35.932123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.932147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.932208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.932221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.932270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.932283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.900 #62 NEW cov: 12173 ft: 14855 corp: 40/3764b lim: 120 exec/s: 62 rss: 72Mb L: 73/120 MS: 1 ShuffleBytes- 00:06:48.900 [2024-05-16 20:05:35.972237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173647142145 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.972261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.972332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.972347] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:35.972398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:35.972410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.900 #63 NEW cov: 12173 ft: 14884 corp: 41/3850b lim: 120 exec/s: 63 rss: 72Mb L: 86/120 MS: 1 EraseBytes- 00:06:48.900 [2024-05-16 20:05:36.012296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:36.012319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:36.012388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:36.012401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.900 [2024-05-16 20:05:36.012449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13310591802071890104 len:47289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.900 [2024-05-16 20:05:36.012467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.900 #64 NEW cov: 12173 ft: 14891 corp: 42/3928b lim: 120 exec/s: 64 rss: 72Mb L: 78/120 MS: 1 CrossOver- 00:06:49.160 [2024-05-16 20:05:36.052786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173110271233 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.052810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.052874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.052885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.052935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.052948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.052996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.053009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.053060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:514 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.053072] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.102953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340173110271233 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.102977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.103039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.103050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.103104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72340172838076673 len:512 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.103117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.103166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.103179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.160 [2024-05-16 20:05:36.103229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:72340172838076673 len:514 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.160 [2024-05-16 20:05:36.103240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:49.160 #66 NEW cov: 12173 ft: 14900 corp: 43/4048b lim: 120 exec/s: 33 rss: 72Mb L: 120/120 MS: 2 ChangeBit-ChangeByte- 00:06:49.160 #66 DONE cov: 12173 ft: 14900 corp: 43/4048b lim: 120 exec/s: 33 rss: 72Mb 00:06:49.160 Done 66 runs in 2 second(s) 00:06:49.160 [2024-05-16 20:05:36.124511] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:49.160 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.160 20:05:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.160 20:05:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.160 20:05:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:06:49.160 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:06:49.160 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.161 20:05:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:06:49.161 [2024-05-16 20:05:36.290837] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:49.161 [2024-05-16 20:05:36.290916] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670454 ] 00:06:49.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.420 [2024-05-16 20:05:36.452655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.420 [2024-05-16 20:05:36.517662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.679 [2024-05-16 20:05:36.576267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.679 [2024-05-16 20:05:36.592229] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:49.679 [2024-05-16 20:05:36.592572] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:06:49.679 INFO: Running with entropic power schedule (0xFF, 100). 00:06:49.679 INFO: Seed: 4291971458 00:06:49.679 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:49.679 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:49.679 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:49.679 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.679 #2 INITED exec/s: 0 rss: 64Mb 00:06:49.679 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:49.679 This may also happen if the target rejected all inputs we tried so far 00:06:49.679 [2024-05-16 20:05:36.637206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:49.679 [2024-05-16 20:05:36.637236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.679 [2024-05-16 20:05:36.637267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:49.679 [2024-05-16 20:05:36.637281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.679 [2024-05-16 20:05:36.637307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:49.679 [2024-05-16 20:05:36.637320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.679 NEW_FUNC[1/685]: 0x4a0450 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:06:49.679 NEW_FUNC[2/685]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:49.679 #13 NEW cov: 11872 ft: 11865 corp: 2/65b lim: 100 exec/s: 0 rss: 71Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:06:49.679 [2024-05-16 20:05:36.797567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:49.679 [2024-05-16 20:05:36.797602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.679 [2024-05-16 20:05:36.797648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:49.679 [2024-05-16 20:05:36.797662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.679 [2024-05-16 20:05:36.797689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:49.680 [2024-05-16 20:05:36.797701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.938 #14 NEW cov: 12002 ft: 12469 corp: 3/129b lim: 100 exec/s: 0 rss: 71Mb L: 64/64 MS: 1 ChangeBit- 00:06:49.938 [2024-05-16 20:05:36.877642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:49.938 [2024-05-16 20:05:36.877670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.938 [2024-05-16 20:05:36.877715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:49.938 [2024-05-16 20:05:36.877729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.938 #21 NEW cov: 12008 ft: 12874 corp: 4/180b lim: 100 exec/s: 0 rss: 71Mb L: 51/64 MS: 2 ShuffleBytes-CrossOver- 00:06:49.938 [2024-05-16 20:05:36.937717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:49.938 [2024-05-16 20:05:36.937744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.938 #22 NEW cov: 12093 ft: 13443 corp: 5/211b lim: 100 exec/s: 0 rss: 71Mb L: 31/64 MS: 1 CrossOver- 00:06:49.938 [2024-05-16 20:05:37.018041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:49.938 [2024-05-16 20:05:37.018068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.938 [2024-05-16 20:05:37.018097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:49.938 [2024-05-16 20:05:37.018110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.938 [2024-05-16 20:05:37.018137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:49.938 [2024-05-16 20:05:37.018150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.938 #23 NEW cov: 12093 ft: 13509 corp: 6/275b lim: 100 exec/s: 0 rss: 71Mb L: 64/64 MS: 1 ShuffleBytes- 00:06:49.938 [2024-05-16 20:05:37.068141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:49.938 [2024-05-16 20:05:37.068167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.938 [2024-05-16 20:05:37.068209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:49.938 [2024-05-16 20:05:37.068222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.938 [2024-05-16 20:05:37.068249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:49.938 [2024-05-16 20:05:37.068261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.197 #24 NEW cov: 12093 ft: 13544 corp: 7/340b lim: 100 exec/s: 0 rss: 72Mb L: 65/65 MS: 1 InsertByte- 00:06:50.197 [2024-05-16 20:05:37.118336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.197 [2024-05-16 20:05:37.118363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.197 [2024-05-16 20:05:37.118392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.197 [2024-05-16 20:05:37.118404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.197 [2024-05-16 20:05:37.118430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.197 [2024-05-16 20:05:37.118442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.197 [2024-05-16 20:05:37.118489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:50.197 [2024-05-16 20:05:37.118501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.197 #25 NEW cov: 12093 ft: 
13922 corp: 8/439b lim: 100 exec/s: 0 rss: 72Mb L: 99/99 MS: 1 CrossOver- 00:06:50.197 [2024-05-16 20:05:37.198471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.197 [2024-05-16 20:05:37.198497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.197 [2024-05-16 20:05:37.198550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.197 [2024-05-16 20:05:37.198567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.197 #26 NEW cov: 12093 ft: 14001 corp: 9/490b lim: 100 exec/s: 0 rss: 72Mb L: 51/99 MS: 1 ChangeBinInt- 00:06:50.197 [2024-05-16 20:05:37.258663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.197 [2024-05-16 20:05:37.258691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.197 [2024-05-16 20:05:37.258736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.197 [2024-05-16 20:05:37.258750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.197 [2024-05-16 20:05:37.258777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.197 [2024-05-16 20:05:37.258791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.197 #27 NEW cov: 12093 ft: 14051 corp: 10/554b lim: 100 exec/s: 0 rss: 72Mb L: 64/99 MS: 1 ChangeBit- 00:06:50.197 [2024-05-16 20:05:37.338891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.197 [2024-05-16 20:05:37.338918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.198 [2024-05-16 20:05:37.338960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.198 [2024-05-16 20:05:37.338974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.198 [2024-05-16 20:05:37.339000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.198 [2024-05-16 20:05:37.339013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.198 [2024-05-16 20:05:37.339037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:50.198 [2024-05-16 20:05:37.339049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.456 #28 NEW cov: 12093 ft: 14093 corp: 11/639b lim: 100 exec/s: 0 rss: 72Mb L: 85/99 MS: 1 CopyPart- 00:06:50.456 [2024-05-16 20:05:37.419054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.456 [2024-05-16 20:05:37.419082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:50.456 [2024-05-16 20:05:37.419110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.456 [2024-05-16 20:05:37.419123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.456 [2024-05-16 20:05:37.419149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.456 [2024-05-16 20:05:37.419161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.456 #29 NEW cov: 12093 ft: 14127 corp: 12/704b lim: 100 exec/s: 0 rss: 72Mb L: 65/99 MS: 1 InsertByte- 00:06:50.456 [2024-05-16 20:05:37.479313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.456 [2024-05-16 20:05:37.479339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.456 [2024-05-16 20:05:37.479383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.456 [2024-05-16 20:05:37.479396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.456 [2024-05-16 20:05:37.479423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.456 [2024-05-16 20:05:37.479439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.456 [2024-05-16 20:05:37.479483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:50.456 [2024-05-16 20:05:37.479496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.456 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:50.456 #30 NEW cov: 12110 ft: 14171 corp: 13/789b lim: 100 exec/s: 0 rss: 72Mb L: 85/99 MS: 1 ChangeByte- 00:06:50.456 [2024-05-16 20:05:37.559340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.456 [2024-05-16 20:05:37.559369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.716 #31 NEW cov: 12110 ft: 14214 corp: 14/820b lim: 100 exec/s: 31 rss: 72Mb L: 31/99 MS: 1 ChangeBit- 00:06:50.716 [2024-05-16 20:05:37.639638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.716 [2024-05-16 20:05:37.639666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.639709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.716 [2024-05-16 20:05:37.639722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.639748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.716 [2024-05-16 20:05:37.639760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.716 #32 NEW cov: 12110 ft: 14229 corp: 15/885b lim: 100 exec/s: 32 rss: 72Mb L: 65/99 MS: 1 CopyPart- 00:06:50.716 [2024-05-16 20:05:37.721018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.716 [2024-05-16 20:05:37.721065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.721142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.716 [2024-05-16 20:05:37.721166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.721241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.716 [2024-05-16 20:05:37.721262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.716 #33 NEW cov: 12110 ft: 14398 corp: 16/950b lim: 100 exec/s: 33 rss: 72Mb L: 65/99 MS: 1 InsertByte- 00:06:50.716 [2024-05-16 20:05:37.760870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.716 [2024-05-16 20:05:37.760897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.760950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.716 [2024-05-16 20:05:37.760963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.761016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.716 [2024-05-16 20:05:37.761029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.716 #34 NEW cov: 12110 ft: 14534 corp: 17/1018b lim: 100 exec/s: 34 rss: 72Mb L: 68/99 MS: 1 CMP- DE: "\000\000\000\366"- 00:06:50.716 [2024-05-16 20:05:37.801064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.716 [2024-05-16 20:05:37.801093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.801140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.716 [2024-05-16 20:05:37.801153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.801205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.716 [2024-05-16 20:05:37.801217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.801270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:50.716 [2024-05-16 20:05:37.801281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:06:50.716 #35 NEW cov: 12110 ft: 14547 corp: 18/1117b lim: 100 exec/s: 35 rss: 72Mb L: 99/99 MS: 1 ChangeByte- 00:06:50.716 [2024-05-16 20:05:37.851246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.716 [2024-05-16 20:05:37.851271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.851342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.716 [2024-05-16 20:05:37.851354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.851407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.716 [2024-05-16 20:05:37.851420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.716 [2024-05-16 20:05:37.851476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:50.716 [2024-05-16 20:05:37.851488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.975 #36 NEW cov: 12110 ft: 14647 corp: 19/1202b lim: 100 exec/s: 36 rss: 72Mb L: 85/99 MS: 1 ChangeBit- 00:06:50.975 [2024-05-16 20:05:37.901266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.975 [2024-05-16 20:05:37.901293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.975 [2024-05-16 20:05:37.901360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.975 [2024-05-16 20:05:37.901373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.975 [2024-05-16 20:05:37.901423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.975 [2024-05-16 20:05:37.901435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.975 #37 NEW cov: 12110 ft: 14796 corp: 20/1270b lim: 100 exec/s: 37 rss: 72Mb L: 68/99 MS: 1 CrossOver- 00:06:50.975 [2024-05-16 20:05:37.941356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.975 [2024-05-16 20:05:37.941381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.975 [2024-05-16 20:05:37.941448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.975 [2024-05-16 20:05:37.941466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.975 [2024-05-16 20:05:37.941520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.976 [2024-05-16 20:05:37.941535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.976 #38 NEW cov: 12110 ft: 14872 corp: 21/1338b lim: 100 exec/s: 
38 rss: 72Mb L: 68/99 MS: 1 ChangeByte- 00:06:50.976 [2024-05-16 20:05:37.991547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.976 [2024-05-16 20:05:37.991571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.976 [2024-05-16 20:05:37.991643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.976 [2024-05-16 20:05:37.991656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.976 [2024-05-16 20:05:37.991710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.976 [2024-05-16 20:05:37.991723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.976 #39 NEW cov: 12110 ft: 14885 corp: 22/1402b lim: 100 exec/s: 39 rss: 72Mb L: 64/99 MS: 1 ChangeByte- 00:06:50.976 [2024-05-16 20:05:38.031786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.976 [2024-05-16 20:05:38.031810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.976 [2024-05-16 20:05:38.031878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.976 [2024-05-16 20:05:38.031888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.976 [2024-05-16 20:05:38.031937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.976 [2024-05-16 20:05:38.031949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.976 [2024-05-16 20:05:38.032003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:50.976 [2024-05-16 20:05:38.032015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.976 #40 NEW cov: 12110 ft: 14903 corp: 23/1487b lim: 100 exec/s: 40 rss: 72Mb L: 85/99 MS: 1 CopyPart- 00:06:50.976 [2024-05-16 20:05:38.071627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.976 [2024-05-16 20:05:38.071651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.976 [2024-05-16 20:05:38.071693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.976 [2024-05-16 20:05:38.071706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.976 #41 NEW cov: 12110 ft: 14923 corp: 24/1538b lim: 100 exec/s: 41 rss: 72Mb L: 51/99 MS: 1 CopyPart- 00:06:50.976 [2024-05-16 20:05:38.111851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:50.976 [2024-05-16 20:05:38.111875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.976 [2024-05-16 
20:05:38.111945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:50.976 [2024-05-16 20:05:38.111958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.976 [2024-05-16 20:05:38.112010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:50.976 [2024-05-16 20:05:38.112023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.235 #42 NEW cov: 12110 ft: 14983 corp: 25/1603b lim: 100 exec/s: 42 rss: 73Mb L: 65/99 MS: 1 CMP- DE: "\255\017\241Eg\245\006\000"- 00:06:51.235 [2024-05-16 20:05:38.161988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.235 [2024-05-16 20:05:38.162012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.162081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.235 [2024-05-16 20:05:38.162094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.162146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:51.235 [2024-05-16 20:05:38.162158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.235 #43 NEW cov: 12110 ft: 15023 corp: 26/1667b lim: 100 exec/s: 43 rss: 73Mb L: 64/99 MS: 1 ChangeByte- 00:06:51.235 [2024-05-16 20:05:38.202131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.235 [2024-05-16 20:05:38.202154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.202229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.235 [2024-05-16 20:05:38.202241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.202294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:51.235 [2024-05-16 20:05:38.202307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.235 #44 NEW cov: 12110 ft: 15108 corp: 27/1732b lim: 100 exec/s: 44 rss: 73Mb L: 65/99 MS: 1 InsertByte- 00:06:51.235 [2024-05-16 20:05:38.252162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.235 [2024-05-16 20:05:38.252199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.252253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.235 [2024-05-16 20:05:38.252266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.292315] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.235 [2024-05-16 20:05:38.292342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.292380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.235 [2024-05-16 20:05:38.292392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.235 #48 NEW cov: 12110 ft: 15119 corp: 28/1772b lim: 100 exec/s: 48 rss: 73Mb L: 40/99 MS: 4 CrossOver-ShuffleBytes-InsertRepeatedBytes-ShuffleBytes- 00:06:51.235 [2024-05-16 20:05:38.332538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.235 [2024-05-16 20:05:38.332561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.332630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.235 [2024-05-16 20:05:38.332642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.235 [2024-05-16 20:05:38.332698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:51.235 [2024-05-16 20:05:38.332710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.235 #49 NEW cov: 12110 ft: 15162 corp: 29/1836b lim: 100 exec/s: 49 rss: 73Mb L: 64/99 MS: 1 ShuffleBytes- 00:06:51.522 [2024-05-16 20:05:38.382963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.522 [2024-05-16 20:05:38.382989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.383040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.522 [2024-05-16 20:05:38.383050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.383104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:51.522 [2024-05-16 20:05:38.383116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.383168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:51.522 [2024-05-16 20:05:38.383179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.383230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:51.522 [2024-05-16 20:05:38.383242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:51.522 #50 NEW cov: 12110 ft: 15214 corp: 30/1936b lim: 100 exec/s: 50 rss: 73Mb L: 100/100 MS: 1 CopyPart- 00:06:51.522 [2024-05-16 20:05:38.422587] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.522 [2024-05-16 20:05:38.422624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.522 #53 NEW cov: 12110 ft: 15219 corp: 31/1975b lim: 100 exec/s: 53 rss: 73Mb L: 39/100 MS: 3 ShuffleBytes-ChangeByte-CrossOver- 00:06:51.522 [2024-05-16 20:05:38.462902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.522 [2024-05-16 20:05:38.462928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.462979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.522 [2024-05-16 20:05:38.462993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.463044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:51.522 [2024-05-16 20:05:38.463057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.522 #54 NEW cov: 12110 ft: 15237 corp: 32/2040b lim: 100 exec/s: 54 rss: 73Mb L: 65/100 MS: 1 ChangeBit- 00:06:51.522 [2024-05-16 20:05:38.513230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.522 [2024-05-16 20:05:38.513255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.513326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.522 [2024-05-16 20:05:38.513338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.513389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:51.522 [2024-05-16 20:05:38.513402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.522 #55 NEW cov: 12117 ft: 15293 corp: 33/2108b lim: 100 exec/s: 55 rss: 73Mb L: 68/100 MS: 1 ShuffleBytes- 00:06:51.522 [2024-05-16 20:05:38.563058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.522 [2024-05-16 20:05:38.563082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.563135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.522 [2024-05-16 20:05:38.563148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.522 #56 NEW cov: 12117 ft: 15299 corp: 34/2159b lim: 100 exec/s: 56 rss: 73Mb L: 51/100 MS: 1 CMP- DE: "\001\000\001\263"- 00:06:51.522 [2024-05-16 20:05:38.613191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:51.522 [2024-05-16 20:05:38.613216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.522 [2024-05-16 20:05:38.613256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:51.522 [2024-05-16 20:05:38.613269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.522 #57 NEW cov: 12117 ft: 15349 corp: 35/2210b lim: 100 exec/s: 28 rss: 74Mb L: 51/100 MS: 1 ChangeBinInt- 00:06:51.522 #57 DONE cov: 12117 ft: 15349 corp: 35/2210b lim: 100 exec/s: 28 rss: 74Mb 00:06:51.522 ###### Recommended dictionary. ###### 00:06:51.522 "\000\000\000\366" # Uses: 0 00:06:51.522 "\255\017\241Eg\245\006\000" # Uses: 0 00:06:51.522 "\001\000\001\263" # Uses: 0 00:06:51.522 ###### End of recommended dictionary. ###### 00:06:51.522 Done 57 runs in 2 second(s) 00:06:51.522 [2024-05-16 20:05:38.649416] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.795 20:05:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:06:51.795 
[2024-05-16 20:05:38.816389] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:51.795 [2024-05-16 20:05:38.816473] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670854 ] 00:06:51.795 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.055 [2024-05-16 20:05:38.985871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.055 [2024-05-16 20:05:39.050747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.055 [2024-05-16 20:05:39.109753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.055 [2024-05-16 20:05:39.125716] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:52.055 [2024-05-16 20:05:39.126069] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:06:52.055 INFO: Running with entropic power schedule (0xFF, 100). 00:06:52.055 INFO: Seed: 2529048381 00:06:52.055 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:52.055 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:52.055 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:52.055 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.055 #2 INITED exec/s: 0 rss: 63Mb 00:06:52.055 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.055 This may also happen if the target rejected all inputs we tried so far 00:06:52.055 [2024-05-16 20:05:39.186443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:52.055 [2024-05-16 20:05:39.186491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.313 NEW_FUNC[1/685]: 0x4a3410 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:06:52.313 NEW_FUNC[2/685]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.313 #12 NEW cov: 11839 ft: 11834 corp: 2/14b lim: 50 exec/s: 0 rss: 71Mb L: 13/13 MS: 5 ChangeBit-ChangeBit-ChangeByte-InsertByte-InsertRepeatedBytes- 00:06:52.313 [2024-05-16 20:05:39.346684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069603393535 len:65536 00:06:52.313 [2024-05-16 20:05:39.346724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.313 #16 NEW cov: 11980 ft: 12419 corp: 3/26b lim: 50 exec/s: 0 rss: 71Mb L: 12/13 MS: 4 InsertByte-InsertByte-ChangeBit-InsertRepeatedBytes- 00:06:52.313 [2024-05-16 20:05:39.397551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:256 00:06:52.313 [2024-05-16 20:05:39.397581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.313 [2024-05-16 20:05:39.397645] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:52.313 [2024-05-16 20:05:39.397659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.314 [2024-05-16 20:05:39.397747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:52.314 [2024-05-16 20:05:39.397762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.314 #17 NEW cov: 11986 ft: 13022 corp: 4/64b lim: 50 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:52.314 [2024-05-16 20:05:39.457833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743523953737727 len:2881 00:06:52.314 [2024-05-16 20:05:39.457863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.314 [2024-05-16 20:05:39.457943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65331 00:06:52.314 [2024-05-16 20:05:39.457957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.572 #18 NEW cov: 12071 ft: 13493 corp: 5/84b lim: 50 exec/s: 0 rss: 72Mb L: 20/38 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\177"- 00:06:52.572 [2024-05-16 20:05:39.518291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:52.572 [2024-05-16 20:05:39.518318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.572 [2024-05-16 20:05:39.518382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 00:06:52.572 [2024-05-16 20:05:39.518398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.572 #23 NEW cov: 12071 ft: 13578 corp: 6/110b lim: 50 exec/s: 0 rss: 72Mb L: 26/38 MS: 5 InsertByte-ChangeBit-ChangeByte-CMP-InsertRepeatedBytes- DE: "\377\377\377\377"- 00:06:52.572 [2024-05-16 20:05:39.568550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:137700946477056 len:1 00:06:52.572 [2024-05-16 20:05:39.568578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.572 #24 NEW cov: 12071 ft: 13653 corp: 7/123b lim: 50 exec/s: 0 rss: 72Mb L: 13/38 MS: 1 CopyPart- 00:06:52.572 [2024-05-16 20:05:39.619501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:52.572 [2024-05-16 20:05:39.619527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.572 [2024-05-16 20:05:39.619597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18688 00:06:52.572 [2024-05-16 20:05:39.619615] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.572 [2024-05-16 20:05:39.619694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5208492447423790920 len:3793 00:06:52.572 [2024-05-16 20:05:39.619710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.572 #25 NEW cov: 12071 ft: 13733 corp: 8/153b lim: 50 exec/s: 0 rss: 72Mb L: 30/38 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:52.572 [2024-05-16 20:05:39.679753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:256 00:06:52.572 [2024-05-16 20:05:39.679782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.572 [2024-05-16 20:05:39.679873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:52.572 [2024-05-16 20:05:39.679893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.572 [2024-05-16 20:05:39.679978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:52.572 [2024-05-16 20:05:39.680007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.572 #31 NEW cov: 12071 ft: 13789 corp: 9/191b lim: 50 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:52.832 [2024-05-16 20:05:39.740015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:52.832 [2024-05-16 20:05:39.740043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.832 [2024-05-16 20:05:39.740130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 00:06:52.832 [2024-05-16 20:05:39.740147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.832 [2024-05-16 20:05:39.740227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1067353112899504200 len:1 00:06:52.832 [2024-05-16 20:05:39.740244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.832 #32 NEW cov: 12071 ft: 13833 corp: 10/228b lim: 50 exec/s: 0 rss: 72Mb L: 37/38 MS: 1 InsertRepeatedBytes- 00:06:52.832 [2024-05-16 20:05:39.789981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:188809215 len:1 00:06:52.832 [2024-05-16 20:05:39.790007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.832 [2024-05-16 20:05:39.790072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65331 00:06:52.832 [2024-05-16 20:05:39.790089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.832 #33 NEW cov: 12071 ft: 13885 corp: 11/248b lim: 50 exec/s: 0 rss: 72Mb L: 20/38 MS: 1 InsertRepeatedBytes- 00:06:52.832 [2024-05-16 20:05:39.840039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069603337215 len:65536 00:06:52.832 [2024-05-16 20:05:39.840067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.832 #34 NEW cov: 12071 ft: 13984 corp: 12/260b lim: 50 exec/s: 0 rss: 72Mb L: 12/38 MS: 1 ChangeByte- 00:06:52.832 [2024-05-16 20:05:39.890648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095417921600 len:1 00:06:52.832 [2024-05-16 20:05:39.890674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.832 [2024-05-16 20:05:39.890745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65331 00:06:52.832 [2024-05-16 20:05:39.890761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.832 #35 NEW cov: 12071 ft: 14022 corp: 13/280b lim: 50 exec/s: 0 rss: 72Mb L: 20/38 MS: 1 ShuffleBytes- 00:06:52.832 [2024-05-16 20:05:39.951252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:52.832 [2024-05-16 20:05:39.951280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.832 [2024-05-16 20:05:39.951350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 00:06:52.832 [2024-05-16 20:05:39.951367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.832 [2024-05-16 20:05:39.951444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1067353112899504200 len:133 00:06:52.832 [2024-05-16 20:05:39.951476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.091 #36 NEW cov: 12071 ft: 14039 corp: 14/318b lim: 50 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 InsertByte- 00:06:53.091 [2024-05-16 20:05:40.011879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:53.091 [2024-05-16 20:05:40.011911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.091 [2024-05-16 20:05:40.011972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 00:06:53.091 [2024-05-16 20:05:40.011989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.091 [2024-05-16 20:05:40.012062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5192316120036034716 len:1 00:06:53.091 [2024-05-16 20:05:40.012082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.091 #37 NEW cov: 12071 ft: 14055 corp: 15/356b lim: 50 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 InsertByte- 00:06:53.091 [2024-05-16 20:05:40.061637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743523953737727 len:2881 00:06:53.091 [2024-05-16 20:05:40.061667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.091 [2024-05-16 20:05:40.061721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744039349813247 len:65331 00:06:53.091 [2024-05-16 20:05:40.061741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.091 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:53.091 #38 NEW cov: 12094 ft: 14099 corp: 16/376b lim: 50 exec/s: 0 rss: 72Mb L: 20/38 MS: 1 ChangeBit- 00:06:53.091 [2024-05-16 20:05:40.131951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095417921600 len:1 00:06:53.091 [2024-05-16 20:05:40.131981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.091 [2024-05-16 20:05:40.132046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65331 00:06:53.091 [2024-05-16 20:05:40.132062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.091 #39 NEW cov: 12094 ft: 14114 corp: 17/396b lim: 50 exec/s: 39 rss: 72Mb L: 20/38 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:53.091 [2024-05-16 20:05:40.192118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095417921600 len:1 00:06:53.091 [2024-05-16 20:05:40.192146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.091 [2024-05-16 20:05:40.192227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65331 00:06:53.091 [2024-05-16 20:05:40.192246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.092 #40 NEW cov: 12094 ft: 14124 corp: 18/416b lim: 50 exec/s: 40 rss: 73Mb L: 20/38 MS: 1 ChangeASCIIInt- 00:06:53.351 [2024-05-16 20:05:40.252603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:53.351 [2024-05-16 20:05:40.252632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.351 [2024-05-16 20:05:40.252702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208633181829875784 len:18688 00:06:53.351 [2024-05-16 20:05:40.252723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.351 [2024-05-16 20:05:40.252805] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5208492447423790920 len:3793 00:06:53.351 [2024-05-16 20:05:40.252825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.351 #41 NEW cov: 12094 ft: 14152 corp: 19/446b lim: 50 exec/s: 41 rss: 73Mb L: 30/38 MS: 1 ChangeBit- 00:06:53.351 [2024-05-16 20:05:40.312382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:71776119262478400 len:1 00:06:53.351 [2024-05-16 20:05:40.312411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.351 #42 NEW cov: 12094 ft: 14162 corp: 20/464b lim: 50 exec/s: 42 rss: 73Mb L: 18/38 MS: 1 EraseBytes- 00:06:53.351 [2024-05-16 20:05:40.372696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:261993037056 len:1 00:06:53.351 [2024-05-16 20:05:40.372728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.351 #43 NEW cov: 12094 ft: 14200 corp: 21/477b lim: 50 exec/s: 43 rss: 73Mb L: 13/38 MS: 1 ShuffleBytes- 00:06:53.351 [2024-05-16 20:05:40.433378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:53.351 [2024-05-16 20:05:40.433409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.351 [2024-05-16 20:05:40.433498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341517128 len:18505 00:06:53.351 [2024-05-16 20:05:40.433515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.351 [2024-05-16 20:05:40.433593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5192316120036034632 len:1 00:06:53.351 [2024-05-16 20:05:40.433611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.351 #44 NEW cov: 12094 ft: 14214 corp: 22/516b lim: 50 exec/s: 44 rss: 73Mb L: 39/39 MS: 1 InsertByte- 00:06:53.610 [2024-05-16 20:05:40.503789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:53.610 [2024-05-16 20:05:40.503818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.610 [2024-05-16 20:05:40.503902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 00:06:53.610 [2024-05-16 20:05:40.503918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.610 [2024-05-16 20:05:40.504013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5192316120036034716 len:1 00:06:53.610 [2024-05-16 20:05:40.504025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.610 #45 NEW cov: 12094 ft: 14246 corp: 23/554b lim: 50 
exec/s: 45 rss: 73Mb L: 38/39 MS: 1 ShuffleBytes- 00:06:53.610 [2024-05-16 20:05:40.573940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069603393535 len:65536 00:06:53.611 [2024-05-16 20:05:40.573967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.611 [2024-05-16 20:05:40.574059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073696116735 len:65536 00:06:53.611 [2024-05-16 20:05:40.574077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.611 [2024-05-16 20:05:40.574149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:53.611 [2024-05-16 20:05:40.574168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.611 #46 NEW cov: 12094 ft: 14271 corp: 24/588b lim: 50 exec/s: 46 rss: 73Mb L: 34/39 MS: 1 InsertRepeatedBytes- 00:06:53.611 [2024-05-16 20:05:40.623957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095417921600 len:1 00:06:53.611 [2024-05-16 20:05:40.623985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.611 [2024-05-16 20:05:40.624047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65331 00:06:53.611 [2024-05-16 20:05:40.624064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.611 #47 NEW cov: 12094 ft: 14332 corp: 25/608b lim: 50 exec/s: 47 rss: 73Mb L: 20/39 MS: 1 CopyPart- 00:06:53.611 [2024-05-16 20:05:40.674141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095417921600 len:1 00:06:53.611 [2024-05-16 20:05:40.674168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.611 [2024-05-16 20:05:40.674238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65331 00:06:53.611 [2024-05-16 20:05:40.674257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.611 #48 NEW cov: 12094 ft: 14357 corp: 26/629b lim: 50 exec/s: 48 rss: 73Mb L: 21/39 MS: 1 InsertByte- 00:06:53.611 [2024-05-16 20:05:40.724657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:53.611 [2024-05-16 20:05:40.724684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.611 [2024-05-16 20:05:40.724761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 00:06:53.611 [2024-05-16 20:05:40.724777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.611 [2024-05-16 
20:05:40.724862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1067353112899504200 len:1 00:06:53.611 [2024-05-16 20:05:40.724873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.611 #49 NEW cov: 12094 ft: 14373 corp: 27/667b lim: 50 exec/s: 49 rss: 73Mb L: 38/39 MS: 1 ShuffleBytes- 00:06:53.868 [2024-05-16 20:05:40.774294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7710162562058289152 len:32062 00:06:53.869 [2024-05-16 20:05:40.774322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.869 #51 NEW cov: 12094 ft: 14395 corp: 28/677b lim: 50 exec/s: 51 rss: 73Mb L: 10/39 MS: 2 EraseBytes-InsertByte- 00:06:53.869 [2024-05-16 20:05:40.824749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4611686018628661504 len:65281 00:06:53.869 [2024-05-16 20:05:40.824777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.869 [2024-05-16 20:05:40.824841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 00:06:53.869 [2024-05-16 20:05:40.824864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.869 #52 NEW cov: 12094 ft: 14405 corp: 29/698b lim: 50 exec/s: 52 rss: 73Mb L: 21/39 MS: 1 InsertByte- 00:06:53.869 [2024-05-16 20:05:40.874942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095417931584 len:1 00:06:53.869 [2024-05-16 20:05:40.874968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.869 [2024-05-16 20:05:40.875033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65331 00:06:53.869 [2024-05-16 20:05:40.875047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.869 #53 NEW cov: 12094 ft: 14477 corp: 30/719b lim: 50 exec/s: 53 rss: 73Mb L: 21/39 MS: 1 ChangeByte- 00:06:53.869 [2024-05-16 20:05:40.934966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7710162553468354560 len:32062 00:06:53.869 [2024-05-16 20:05:40.934991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.869 #54 NEW cov: 12094 ft: 14558 corp: 31/729b lim: 50 exec/s: 54 rss: 73Mb L: 10/39 MS: 1 ChangeBinInt- 00:06:53.869 [2024-05-16 20:05:40.996068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:53.869 [2024-05-16 20:05:40.996094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.869 [2024-05-16 20:05:40.996177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 00:06:53.869 [2024-05-16 20:05:40.996191] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.869 [2024-05-16 20:05:40.996276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1067353112899504200 len:2 00:06:53.869 [2024-05-16 20:05:40.996288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.869 [2024-05-16 20:05:40.996368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:101632 len:1 00:06:53.869 [2024-05-16 20:05:40.996382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.128 #55 NEW cov: 12094 ft: 14776 corp: 32/770b lim: 50 exec/s: 55 rss: 73Mb L: 41/41 MS: 1 CMP- DE: "\001\000\001\215"- 00:06:54.129 [2024-05-16 20:05:41.046478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:54.129 [2024-05-16 20:05:41.046503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.129 [2024-05-16 20:05:41.046588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341517128 len:18505 00:06:54.129 [2024-05-16 20:05:41.046603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.129 [2024-05-16 20:05:41.046689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5192316326194464840 len:1 00:06:54.129 [2024-05-16 20:05:41.046701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.129 [2024-05-16 20:05:41.046785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8650752 len:1 00:06:54.129 [2024-05-16 20:05:41.046799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.129 #56 NEW cov: 12094 ft: 14810 corp: 33/810b lim: 50 exec/s: 56 rss: 74Mb L: 40/41 MS: 1 InsertByte- 00:06:54.129 [2024-05-16 20:05:41.105975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18014402804386815 len:65281 00:06:54.129 [2024-05-16 20:05:41.106002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.129 #57 NEW cov: 12094 ft: 14831 corp: 34/821b lim: 50 exec/s: 57 rss: 74Mb L: 11/41 MS: 1 CrossOver- 00:06:54.129 [2024-05-16 20:05:41.166884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5208492447423791103 len:18505 00:06:54.129 [2024-05-16 20:05:41.166909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.129 [2024-05-16 20:05:41.166984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5208492444341517128 len:18505 00:06:54.129 [2024-05-16 20:05:41.167000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.129 [2024-05-16 20:05:41.167078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5192318319059290184 len:1 00:06:54.129 [2024-05-16 20:05:41.167094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.129 #58 NEW cov: 12094 ft: 14835 corp: 35/860b lim: 50 exec/s: 29 rss: 74Mb L: 39/41 MS: 1 ChangeBit- 00:06:54.129 #58 DONE cov: 12094 ft: 14835 corp: 35/860b lim: 50 exec/s: 29 rss: 74Mb 00:06:54.129 ###### Recommended dictionary. ###### 00:06:54.129 "\377\377\377\377\377\377\377\177" # Uses: 0 00:06:54.129 "\377\377\377\377" # Uses: 3 00:06:54.129 "\001\000\001\215" # Uses: 0 00:06:54.129 ###### End of recommended dictionary. ###### 00:06:54.129 Done 58 runs in 2 second(s) 00:06:54.129 [2024-05-16 20:05:41.189555] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.388 20:05:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:06:54.388 [2024-05-16 20:05:41.348078] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:54.388 [2024-05-16 20:05:41.348144] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671295 ] 00:06:54.388 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.388 [2024-05-16 20:05:41.506904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.647 [2024-05-16 20:05:41.571415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.647 [2024-05-16 20:05:41.629778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.647 [2024-05-16 20:05:41.645737] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:54.647 [2024-05-16 20:05:41.646094] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:54.647 INFO: Running with entropic power schedule (0xFF, 100). 00:06:54.647 INFO: Seed: 756050115 00:06:54.647 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:54.647 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:54.647 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:54.647 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.647 #2 INITED exec/s: 0 rss: 63Mb 00:06:54.647 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:54.647 This may also happen if the target rejected all inputs we tried so far 00:06:54.647 [2024-05-16 20:05:41.691716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:54.647 [2024-05-16 20:05:41.691747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.647 [2024-05-16 20:05:41.691788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:54.647 [2024-05-16 20:05:41.691801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.647 [2024-05-16 20:05:41.691856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:54.647 [2024-05-16 20:05:41.691870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.647 [2024-05-16 20:05:41.691924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:54.647 [2024-05-16 20:05:41.691937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.906 NEW_FUNC[1/687]: 0x4a4fd0 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:06:54.906 NEW_FUNC[2/687]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:54.906 #7 NEW cov: 11907 ft: 11908 corp: 2/87b lim: 90 exec/s: 0 rss: 71Mb L: 86/86 MS: 5 InsertByte-ShuffleBytes-CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:06:54.906 [2024-05-16 20:05:41.841838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:54.906 [2024-05-16 20:05:41.841874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.906 [2024-05-16 20:05:41.841930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:54.906 [2024-05-16 20:05:41.841946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.906 [2024-05-16 20:05:41.842000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:54.906 [2024-05-16 20:05:41.842013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.906 #8 NEW cov: 12038 ft: 12879 corp: 3/158b lim: 90 exec/s: 0 rss: 71Mb L: 71/86 MS: 1 CrossOver- 00:06:54.906 [2024-05-16 20:05:41.881863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:54.906 [2024-05-16 20:05:41.881891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.906 [2024-05-16 20:05:41.881928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:54.906 [2024-05-16 20:05:41.881940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.906 [2024-05-16 20:05:41.881992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:54.906 [2024-05-16 20:05:41.882004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.906 #9 NEW cov: 12044 ft: 13017 corp: 4/229b lim: 90 exec/s: 0 rss: 71Mb L: 71/86 MS: 1 ChangeBit- 00:06:54.906 [2024-05-16 20:05:41.932012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:54.906 [2024-05-16 20:05:41.932040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.907 [2024-05-16 20:05:41.932078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:54.907 [2024-05-16 20:05:41.932091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.907 [2024-05-16 20:05:41.932142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:54.907 [2024-05-16 20:05:41.932155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.907 #10 NEW cov: 12129 ft: 13292 corp: 5/299b lim: 90 exec/s: 0 rss: 72Mb L: 70/86 MS: 1 EraseBytes- 00:06:54.907 [2024-05-16 20:05:41.982177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:54.907 [2024-05-16 20:05:41.982201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.907 [2024-05-16 20:05:41.982254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:54.907 [2024-05-16 20:05:41.982266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.907 [2024-05-16 20:05:41.982315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:54.907 [2024-05-16 20:05:41.982327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.907 #11 NEW cov: 12129 ft: 13346 corp: 6/369b lim: 90 exec/s: 0 rss: 72Mb L: 70/86 MS: 1 CrossOver- 00:06:54.907 [2024-05-16 20:05:42.032294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:54.907 [2024-05-16 20:05:42.032318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.907 [2024-05-16 20:05:42.032368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:54.907 [2024-05-16 20:05:42.032380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.907 [2024-05-16 20:05:42.032431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:54.907 [2024-05-16 20:05:42.032444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:06:55.166 #12 NEW cov: 12129 ft: 13471 corp: 7/439b lim: 90 exec/s: 0 rss: 72Mb L: 70/86 MS: 1 ChangeByte- 00:06:55.166 [2024-05-16 20:05:42.072259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.166 [2024-05-16 20:05:42.072283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.072321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.166 [2024-05-16 20:05:42.072334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.166 #13 NEW cov: 12129 ft: 13916 corp: 8/486b lim: 90 exec/s: 0 rss: 72Mb L: 47/86 MS: 1 InsertRepeatedBytes- 00:06:55.166 [2024-05-16 20:05:42.112529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.166 [2024-05-16 20:05:42.112554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.112599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.166 [2024-05-16 20:05:42.112611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.112663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.166 [2024-05-16 20:05:42.112677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.166 #14 NEW cov: 12129 ft: 13978 corp: 9/557b lim: 90 exec/s: 0 rss: 72Mb L: 71/86 MS: 1 InsertByte- 00:06:55.166 [2024-05-16 20:05:42.162832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.166 [2024-05-16 20:05:42.162856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.162913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.166 [2024-05-16 20:05:42.162926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.162994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.166 [2024-05-16 20:05:42.163007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.163062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:55.166 [2024-05-16 20:05:42.163075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.166 #15 NEW cov: 12129 ft: 14012 corp: 10/634b lim: 90 exec/s: 0 rss: 72Mb L: 77/86 MS: 1 InsertRepeatedBytes- 00:06:55.166 [2024-05-16 20:05:42.202749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.166 [2024-05-16 20:05:42.202774] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.202824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.166 [2024-05-16 20:05:42.202835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.202887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.166 [2024-05-16 20:05:42.202898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.166 #16 NEW cov: 12129 ft: 14104 corp: 11/705b lim: 90 exec/s: 0 rss: 72Mb L: 71/86 MS: 1 ChangeBinInt- 00:06:55.166 [2024-05-16 20:05:42.242934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.166 [2024-05-16 20:05:42.242962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.243005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.166 [2024-05-16 20:05:42.243017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.243068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.166 [2024-05-16 20:05:42.243080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.166 #22 NEW cov: 12129 ft: 14139 corp: 12/776b lim: 90 exec/s: 0 rss: 72Mb L: 71/86 MS: 1 InsertByte- 00:06:55.166 [2024-05-16 20:05:42.283007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.166 [2024-05-16 20:05:42.283031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.283070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.166 [2024-05-16 20:05:42.283082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.166 [2024-05-16 20:05:42.283132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.166 [2024-05-16 20:05:42.283144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.166 #23 NEW cov: 12129 ft: 14164 corp: 13/834b lim: 90 exec/s: 0 rss: 72Mb L: 58/86 MS: 1 EraseBytes- 00:06:55.425 [2024-05-16 20:05:42.323324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.425 [2024-05-16 20:05:42.323348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.323405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.425 [2024-05-16 20:05:42.323418] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.323470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.425 [2024-05-16 20:05:42.323483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.323535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:55.425 [2024-05-16 20:05:42.323547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.425 #24 NEW cov: 12129 ft: 14193 corp: 14/914b lim: 90 exec/s: 0 rss: 72Mb L: 80/86 MS: 1 CrossOver- 00:06:55.425 [2024-05-16 20:05:42.373127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.425 [2024-05-16 20:05:42.373153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.373188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.425 [2024-05-16 20:05:42.373200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.425 #25 NEW cov: 12129 ft: 14238 corp: 15/959b lim: 90 exec/s: 0 rss: 72Mb L: 45/86 MS: 1 EraseBytes- 00:06:55.425 [2024-05-16 20:05:42.423444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.425 [2024-05-16 20:05:42.423472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.423513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.425 [2024-05-16 20:05:42.423527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.423593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.425 [2024-05-16 20:05:42.423606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.425 #26 NEW cov: 12129 ft: 14245 corp: 16/1030b lim: 90 exec/s: 0 rss: 72Mb L: 71/86 MS: 1 InsertByte- 00:06:55.425 [2024-05-16 20:05:42.473611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.425 [2024-05-16 20:05:42.473636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.473687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.425 [2024-05-16 20:05:42.473700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.473751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.425 [2024-05-16 20:05:42.473764] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.425 #27 NEW cov: 12129 ft: 14249 corp: 17/1100b lim: 90 exec/s: 0 rss: 72Mb L: 70/86 MS: 1 ChangeBit- 00:06:55.425 [2024-05-16 20:05:42.513552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.425 [2024-05-16 20:05:42.513577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.425 [2024-05-16 20:05:42.513616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.425 [2024-05-16 20:05:42.513629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.425 #33 NEW cov: 12129 ft: 14305 corp: 18/1149b lim: 90 exec/s: 0 rss: 72Mb L: 49/86 MS: 1 EraseBytes- 00:06:55.426 [2024-05-16 20:05:42.564022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.426 [2024-05-16 20:05:42.564046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.426 [2024-05-16 20:05:42.564098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.426 [2024-05-16 20:05:42.564111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.426 [2024-05-16 20:05:42.564162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.426 [2024-05-16 20:05:42.564174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.426 [2024-05-16 20:05:42.564225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:55.426 [2024-05-16 20:05:42.564239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.685 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:55.685 #34 NEW cov: 12152 ft: 14344 corp: 19/1230b lim: 90 exec/s: 0 rss: 72Mb L: 81/86 MS: 1 CopyPart- 00:06:55.685 [2024-05-16 20:05:42.603932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.685 [2024-05-16 20:05:42.603956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.604003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.685 [2024-05-16 20:05:42.604015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.604068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.685 [2024-05-16 20:05:42.604081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.685 #35 NEW cov: 12152 ft: 14365 corp: 20/1300b lim: 90 
exec/s: 0 rss: 72Mb L: 70/86 MS: 1 ChangeBinInt- 00:06:55.685 [2024-05-16 20:05:42.654092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.685 [2024-05-16 20:05:42.654116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.654167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.685 [2024-05-16 20:05:42.654180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.654229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.685 [2024-05-16 20:05:42.654242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.685 #36 NEW cov: 12152 ft: 14401 corp: 21/1370b lim: 90 exec/s: 0 rss: 72Mb L: 70/86 MS: 1 ShuffleBytes- 00:06:55.685 [2024-05-16 20:05:42.694526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.685 [2024-05-16 20:05:42.694550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.694601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.685 [2024-05-16 20:05:42.694611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.694680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.685 [2024-05-16 20:05:42.694693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.694746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:55.685 [2024-05-16 20:05:42.694759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.694811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:06:55.685 [2024-05-16 20:05:42.694824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:55.685 #40 NEW cov: 12152 ft: 14445 corp: 22/1460b lim: 90 exec/s: 40 rss: 72Mb L: 90/90 MS: 4 ChangeByte-InsertByte-ChangeByte-InsertRepeatedBytes- 00:06:55.685 [2024-05-16 20:05:42.734160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.685 [2024-05-16 20:05:42.734185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.734223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.685 [2024-05-16 20:05:42.734237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.685 #41 NEW cov: 
12152 ft: 14473 corp: 23/1509b lim: 90 exec/s: 41 rss: 73Mb L: 49/90 MS: 1 ChangeByte- 00:06:55.685 [2024-05-16 20:05:42.784451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.685 [2024-05-16 20:05:42.784481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.784531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.685 [2024-05-16 20:05:42.784543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.784596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.685 [2024-05-16 20:05:42.784608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.685 #42 NEW cov: 12152 ft: 14479 corp: 24/1580b lim: 90 exec/s: 42 rss: 73Mb L: 71/90 MS: 1 ChangeBit- 00:06:55.685 [2024-05-16 20:05:42.824562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.685 [2024-05-16 20:05:42.824585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.824636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.685 [2024-05-16 20:05:42.824649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.685 [2024-05-16 20:05:42.824700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.685 [2024-05-16 20:05:42.824712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.945 #43 NEW cov: 12152 ft: 14553 corp: 25/1645b lim: 90 exec/s: 43 rss: 73Mb L: 65/90 MS: 1 EraseBytes- 00:06:55.945 [2024-05-16 20:05:42.874701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.945 [2024-05-16 20:05:42.874726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:42.874775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.945 [2024-05-16 20:05:42.874788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:42.874839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.945 [2024-05-16 20:05:42.874851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.945 #44 NEW cov: 12152 ft: 14567 corp: 26/1715b lim: 90 exec/s: 44 rss: 73Mb L: 70/90 MS: 1 ChangeByte- 00:06:55.945 [2024-05-16 20:05:42.914505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.945 [2024-05-16 20:05:42.914530] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.945 #45 NEW cov: 12152 ft: 15394 corp: 27/1748b lim: 90 exec/s: 45 rss: 73Mb L: 33/90 MS: 1 EraseBytes- 00:06:55.945 [2024-05-16 20:05:42.964960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.945 [2024-05-16 20:05:42.964984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:42.965036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.945 [2024-05-16 20:05:42.965048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:42.965116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.945 [2024-05-16 20:05:42.965129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.945 #46 NEW cov: 12152 ft: 15413 corp: 28/1819b lim: 90 exec/s: 46 rss: 73Mb L: 71/90 MS: 1 ChangeByte- 00:06:55.945 [2024-05-16 20:05:43.005056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.945 [2024-05-16 20:05:43.005080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:43.005147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.945 [2024-05-16 20:05:43.005160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:43.005209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.945 [2024-05-16 20:05:43.005222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.945 #47 NEW cov: 12152 ft: 15439 corp: 29/1890b lim: 90 exec/s: 47 rss: 73Mb L: 71/90 MS: 1 ChangeBinInt- 00:06:55.945 [2024-05-16 20:05:43.055526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:55.945 [2024-05-16 20:05:43.055552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:43.055602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:55.945 [2024-05-16 20:05:43.055615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:43.055663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:55.945 [2024-05-16 20:05:43.055676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:43.055725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:55.945 [2024-05-16 20:05:43.055738] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.945 [2024-05-16 20:05:43.055788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:06:55.945 [2024-05-16 20:05:43.055800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:55.945 #48 NEW cov: 12152 ft: 15460 corp: 30/1980b lim: 90 exec/s: 48 rss: 73Mb L: 90/90 MS: 1 ChangeBinInt- 00:06:56.205 [2024-05-16 20:05:43.105542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.205 [2024-05-16 20:05:43.105569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.105636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.205 [2024-05-16 20:05:43.105649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.105699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.205 [2024-05-16 20:05:43.105712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.105761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:56.205 [2024-05-16 20:05:43.105774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.205 #49 NEW cov: 12152 ft: 15517 corp: 31/2068b lim: 90 exec/s: 49 rss: 74Mb L: 88/90 MS: 1 InsertRepeatedBytes- 00:06:56.205 [2024-05-16 20:05:43.155691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.205 [2024-05-16 20:05:43.155720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.155765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.205 [2024-05-16 20:05:43.155778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.155825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.205 [2024-05-16 20:05:43.155838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.155887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:56.205 [2024-05-16 20:05:43.155899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.205 #50 NEW cov: 12152 ft: 15530 corp: 32/2153b lim: 90 exec/s: 50 rss: 74Mb L: 85/90 MS: 1 InsertRepeatedBytes- 00:06:56.205 [2024-05-16 20:05:43.195485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.205 [2024-05-16 20:05:43.195510] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.195550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.205 [2024-05-16 20:05:43.195563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.205 #51 NEW cov: 12152 ft: 15593 corp: 33/2200b lim: 90 exec/s: 51 rss: 74Mb L: 47/90 MS: 1 ChangeBit- 00:06:56.205 [2024-05-16 20:05:43.235945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.205 [2024-05-16 20:05:43.235970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.236027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.205 [2024-05-16 20:05:43.236040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.236090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.205 [2024-05-16 20:05:43.236102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.236152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:56.205 [2024-05-16 20:05:43.236166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.205 #52 NEW cov: 12152 ft: 15597 corp: 34/2281b lim: 90 exec/s: 52 rss: 74Mb L: 81/90 MS: 1 ChangeBit- 00:06:56.205 [2024-05-16 20:05:43.286081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.205 [2024-05-16 20:05:43.286107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.286163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.205 [2024-05-16 20:05:43.286175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.286224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.205 [2024-05-16 20:05:43.286236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.286287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:56.205 [2024-05-16 20:05:43.286300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.205 #53 NEW cov: 12152 ft: 15604 corp: 35/2365b lim: 90 exec/s: 53 rss: 74Mb L: 84/90 MS: 1 CopyPart- 00:06:56.205 [2024-05-16 20:05:43.325853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.205 [2024-05-16 20:05:43.325878] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.205 [2024-05-16 20:05:43.325921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.205 [2024-05-16 20:05:43.325934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.465 #54 NEW cov: 12152 ft: 15668 corp: 36/2414b lim: 90 exec/s: 54 rss: 74Mb L: 49/90 MS: 1 ChangeBinInt- 00:06:56.465 [2024-05-16 20:05:43.376026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.465 [2024-05-16 20:05:43.376052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.376088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.465 [2024-05-16 20:05:43.376101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.465 #55 NEW cov: 12152 ft: 15693 corp: 37/2463b lim: 90 exec/s: 55 rss: 74Mb L: 49/90 MS: 1 ShuffleBytes- 00:06:56.465 [2024-05-16 20:05:43.426154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.465 [2024-05-16 20:05:43.426177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.426216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.465 [2024-05-16 20:05:43.426229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.465 #56 NEW cov: 12152 ft: 15698 corp: 38/2500b lim: 90 exec/s: 56 rss: 74Mb L: 37/90 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:56.465 [2024-05-16 20:05:43.476421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.465 [2024-05-16 20:05:43.476446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.476518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.465 [2024-05-16 20:05:43.476530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.476582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.465 [2024-05-16 20:05:43.476595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.465 #57 NEW cov: 12152 ft: 15701 corp: 39/2563b lim: 90 exec/s: 57 rss: 74Mb L: 63/90 MS: 1 EraseBytes- 00:06:56.465 [2024-05-16 20:05:43.516702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.465 [2024-05-16 20:05:43.516725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.516772] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.465 [2024-05-16 20:05:43.516782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.516847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.465 [2024-05-16 20:05:43.516862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.516912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:56.465 [2024-05-16 20:05:43.516925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.465 #58 NEW cov: 12152 ft: 15713 corp: 40/2647b lim: 90 exec/s: 58 rss: 74Mb L: 84/90 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:56.465 [2024-05-16 20:05:43.556541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.465 [2024-05-16 20:05:43.556565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.556604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.465 [2024-05-16 20:05:43.556615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.465 #59 NEW cov: 12152 ft: 15719 corp: 41/2692b lim: 90 exec/s: 59 rss: 74Mb L: 45/90 MS: 1 ChangeByte- 00:06:56.465 [2024-05-16 20:05:43.607002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.465 [2024-05-16 20:05:43.607026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.607076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.465 [2024-05-16 20:05:43.607087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.607138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.465 [2024-05-16 20:05:43.607150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.465 [2024-05-16 20:05:43.607202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:56.465 [2024-05-16 20:05:43.607214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.725 #60 NEW cov: 12152 ft: 15722 corp: 42/2777b lim: 90 exec/s: 60 rss: 74Mb L: 85/90 MS: 1 CrossOver- 00:06:56.725 [2024-05-16 20:05:43.656990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.725 [2024-05-16 20:05:43.657012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.725 [2024-05-16 
20:05:43.657064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.725 [2024-05-16 20:05:43.657077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.725 [2024-05-16 20:05:43.657128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.725 [2024-05-16 20:05:43.657140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.725 #61 NEW cov: 12152 ft: 15726 corp: 43/2848b lim: 90 exec/s: 61 rss: 74Mb L: 71/90 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:56.725 [2024-05-16 20:05:43.697203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.725 [2024-05-16 20:05:43.697229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.725 [2024-05-16 20:05:43.697280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.725 [2024-05-16 20:05:43.697296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.725 [2024-05-16 20:05:43.697345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.725 [2024-05-16 20:05:43.697358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.725 [2024-05-16 20:05:43.697408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:56.725 [2024-05-16 20:05:43.697421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.725 #62 NEW cov: 12152 ft: 15733 corp: 44/2926b lim: 90 exec/s: 31 rss: 74Mb L: 78/90 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:56.725 #62 DONE cov: 12152 ft: 15733 corp: 44/2926b lim: 90 exec/s: 31 rss: 74Mb 00:06:56.725 ###### Recommended dictionary. ###### 00:06:56.725 "\001\000\000\000" # Uses: 1 00:06:56.725 "\000\000\000\000\000\000\000\000" # Uses: 1 00:06:56.725 ###### End of recommended dictionary. 
######
00:06:56.725 Done 62 runs in 2 second(s)
00:06:56.725 [2024-05-16 20:05:43.717090] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421'
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:56.725 20:05:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21
00:06:56.984 [2024-05-16 20:05:43.880699] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
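The run.sh trace above shows how each one-second run is assembled: start_llvm_fuzz 21 1 0x1 selects fuzzer type 21, which reappears as -Z 21 on the llvm_nvme_fuzz command line; timen=1 becomes -t 1, corpus_dir becomes -D, and core=0x1 becomes the -m 0x1 core mask, while the trid (with trsvcid 4421 substituted into fuzz_json.conf by the sed line) is passed via -F. The two echo leak: lines populate the LSAN suppression file named in LSAN_OPTIONS. For orientation only, here is a minimal sketch of the libFuzzer entry-point shape such a per-command target takes; this is an illustration under stated assumptions, not the real code behind the TestOneInput and fuzz_nvm_* symbols in llvm_nvme_fuzz.c, and the qpair submission step is stubbed out:

/*
 * Illustrative sketch only. Assumes the SPDK public headers are on the
 * include path; the real harness submits the command over an NVMe/TCP
 * qpair connected to the -F trid instead of the stub at the bottom.
 */
#include <stdint.h>
#include <string.h>
#include "spdk/nvme_spec.h"

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct spdk_nvme_cmd cmd;

	if (size < sizeof(cmd)) {
		return 0;	/* need one full 64-byte SQE of input */
	}
	memcpy(&cmd, data, sizeof(cmd));

	/* Pin the opcode so every input exercises one handler; run 21's
	 * commands print as RESERVATION RELEASE (15), i.e. opcode 0x15. */
	cmd.opc = SPDK_NVME_OPC_RESERVATION_RELEASE;

	/* Real harness: submit cmd on the I/O qpair and poll until the
	 * completion is printed by spdk_nvme_print_completion. */
	(void)cmd;
	return 0;
}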
00:06:56.984 [2024-05-16 20:05:43.880777] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671740 ] 00:06:56.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.984 [2024-05-16 20:05:44.042435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.984 [2024-05-16 20:05:44.106301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.243 [2024-05-16 20:05:44.165063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.243 [2024-05-16 20:05:44.181027] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:57.243 [2024-05-16 20:05:44.181387] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:06:57.243 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.243 INFO: Seed: 3291031164 00:06:57.243 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:57.243 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:57.243 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:57.243 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.243 #2 INITED exec/s: 0 rss: 64Mb 00:06:57.243 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.243 This may also happen if the target rejected all inputs we tried so far 00:06:57.243 [2024-05-16 20:05:44.226642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.243 [2024-05-16 20:05:44.226671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.243 [2024-05-16 20:05:44.226719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.243 [2024-05-16 20:05:44.226733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.243 NEW_FUNC[1/687]: 0x4a81f0 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:06:57.243 NEW_FUNC[2/687]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.243 #4 NEW cov: 11883 ft: 11883 corp: 2/22b lim: 50 exec/s: 0 rss: 71Mb L: 21/21 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:57.243 [2024-05-16 20:05:44.376989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.243 [2024-05-16 20:05:44.377020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.243 [2024-05-16 20:05:44.377073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.243 [2024-05-16 20:05:44.377088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.502 #5 NEW cov: 12013 ft: 12409 corp: 3/42b lim: 50 exec/s: 0 rss: 
71Mb L: 20/21 MS: 1 EraseBytes- 00:06:57.502 [2024-05-16 20:05:44.426918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.502 [2024-05-16 20:05:44.426945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.502 #6 NEW cov: 12019 ft: 13431 corp: 4/54b lim: 50 exec/s: 0 rss: 71Mb L: 12/21 MS: 1 EraseBytes- 00:06:57.502 [2024-05-16 20:05:44.467314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.502 [2024-05-16 20:05:44.467340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.502 [2024-05-16 20:05:44.467403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.502 [2024-05-16 20:05:44.467417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.502 [2024-05-16 20:05:44.467473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:57.502 [2024-05-16 20:05:44.467487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.502 #7 NEW cov: 12104 ft: 13960 corp: 5/87b lim: 50 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 CrossOver- 00:06:57.502 [2024-05-16 20:05:44.507292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.502 [2024-05-16 20:05:44.507317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.502 [2024-05-16 20:05:44.507372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.502 [2024-05-16 20:05:44.507386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.502 #8 NEW cov: 12104 ft: 14091 corp: 6/108b lim: 50 exec/s: 0 rss: 71Mb L: 21/33 MS: 1 CrossOver- 00:06:57.502 [2024-05-16 20:05:44.547259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.502 [2024-05-16 20:05:44.547284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.502 #9 NEW cov: 12104 ft: 14161 corp: 7/118b lim: 50 exec/s: 0 rss: 72Mb L: 10/33 MS: 1 EraseBytes- 00:06:57.502 [2024-05-16 20:05:44.597394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.502 [2024-05-16 20:05:44.597420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.502 #10 NEW cov: 12104 ft: 14227 corp: 8/129b lim: 50 exec/s: 0 rss: 72Mb L: 11/33 MS: 1 InsertByte- 00:06:57.502 [2024-05-16 20:05:44.647750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.502 [2024-05-16 20:05:44.647776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.502 [2024-05-16 20:05:44.647822] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.502 [2024-05-16 20:05:44.647837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.762 #11 NEW cov: 12104 ft: 14254 corp: 9/156b lim: 50 exec/s: 0 rss: 72Mb L: 27/33 MS: 1 InsertRepeatedBytes- 00:06:57.762 [2024-05-16 20:05:44.697683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.762 [2024-05-16 20:05:44.697708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.762 #12 NEW cov: 12104 ft: 14318 corp: 10/168b lim: 50 exec/s: 0 rss: 72Mb L: 12/33 MS: 1 ChangeBinInt- 00:06:57.762 [2024-05-16 20:05:44.737925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.762 [2024-05-16 20:05:44.737950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.762 [2024-05-16 20:05:44.738010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.762 [2024-05-16 20:05:44.738024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.762 #13 NEW cov: 12104 ft: 14422 corp: 11/189b lim: 50 exec/s: 0 rss: 72Mb L: 21/33 MS: 1 CrossOver- 00:06:57.762 [2024-05-16 20:05:44.788067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.762 [2024-05-16 20:05:44.788093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.762 [2024-05-16 20:05:44.788135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.762 [2024-05-16 20:05:44.788148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.762 #14 NEW cov: 12104 ft: 14448 corp: 12/210b lim: 50 exec/s: 0 rss: 72Mb L: 21/33 MS: 1 ShuffleBytes- 00:06:57.762 [2024-05-16 20:05:44.828519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.762 [2024-05-16 20:05:44.828544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.762 [2024-05-16 20:05:44.828613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.762 [2024-05-16 20:05:44.828626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.762 [2024-05-16 20:05:44.828678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:57.762 [2024-05-16 20:05:44.828692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.762 [2024-05-16 20:05:44.828745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:57.762 [2024-05-16 20:05:44.828758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:06:57.762 #15 NEW cov: 12104 ft: 14847 corp: 13/255b lim: 50 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:06:57.762 [2024-05-16 20:05:44.878499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:57.762 [2024-05-16 20:05:44.878523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.762 [2024-05-16 20:05:44.878567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:57.762 [2024-05-16 20:05:44.878579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.762 [2024-05-16 20:05:44.878629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:57.762 [2024-05-16 20:05:44.878642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.762 #16 NEW cov: 12104 ft: 14900 corp: 14/289b lim: 50 exec/s: 0 rss: 72Mb L: 34/45 MS: 1 CrossOver- 00:06:58.022 [2024-05-16 20:05:44.918436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.022 [2024-05-16 20:05:44.918465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.022 [2024-05-16 20:05:44.918501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.022 [2024-05-16 20:05:44.918514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.022 #17 NEW cov: 12104 ft: 14918 corp: 15/310b lim: 50 exec/s: 0 rss: 72Mb L: 21/45 MS: 1 InsertRepeatedBytes- 00:06:58.022 [2024-05-16 20:05:44.958717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.022 [2024-05-16 20:05:44.958744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.022 [2024-05-16 20:05:44.958782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.022 [2024-05-16 20:05:44.958795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.022 [2024-05-16 20:05:44.958863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.022 [2024-05-16 20:05:44.958876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.022 #18 NEW cov: 12104 ft: 14923 corp: 16/344b lim: 50 exec/s: 0 rss: 72Mb L: 34/45 MS: 1 CopyPart- 00:06:58.022 [2024-05-16 20:05:44.998837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.022 [2024-05-16 20:05:44.998864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.022 [2024-05-16 20:05:44.998903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.022 [2024-05-16 
20:05:44.998916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.022 [2024-05-16 20:05:44.998970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.022 [2024-05-16 20:05:44.998983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.022 #19 NEW cov: 12104 ft: 14931 corp: 17/378b lim: 50 exec/s: 0 rss: 72Mb L: 34/45 MS: 1 ChangeBit- 00:06:58.022 [2024-05-16 20:05:45.048661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.022 [2024-05-16 20:05:45.048687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.022 #20 NEW cov: 12104 ft: 14952 corp: 18/396b lim: 50 exec/s: 0 rss: 72Mb L: 18/45 MS: 1 EraseBytes- 00:06:58.022 [2024-05-16 20:05:45.098819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.022 [2024-05-16 20:05:45.098844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.022 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:58.022 #21 NEW cov: 12127 ft: 15000 corp: 19/409b lim: 50 exec/s: 0 rss: 72Mb L: 13/45 MS: 1 InsertByte- 00:06:58.022 [2024-05-16 20:05:45.149443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.022 [2024-05-16 20:05:45.149472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.022 [2024-05-16 20:05:45.149541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.022 [2024-05-16 20:05:45.149555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.022 [2024-05-16 20:05:45.149609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.022 [2024-05-16 20:05:45.149634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.022 [2024-05-16 20:05:45.149687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:58.022 [2024-05-16 20:05:45.149700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.281 #22 NEW cov: 12127 ft: 15030 corp: 20/457b lim: 50 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:06:58.281 [2024-05-16 20:05:45.189038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.281 [2024-05-16 20:05:45.189061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.281 #23 NEW cov: 12127 ft: 15036 corp: 21/467b lim: 50 exec/s: 0 rss: 72Mb L: 10/48 MS: 1 ChangeByte- 00:06:58.281 [2024-05-16 20:05:45.229373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
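Every completion in this stream prints the same status pair: (00/0b) reads as (SCT/SC), status code type 0x00 (generic command status) and status code 0x0b (Invalid Namespace or Format), which is consistent with the fuzzed reservation commands going out with nsid:0, a namespace ID no namespace matches; dnr:1 is the do-not-retry bit, p the phase tag, m the more bit, and sqhd the updated submission queue head. As a hedged sketch of pulling those fields out of a completion, assuming the public spdk/nvme_spec.h layout (this is not the nvme_qpair.c:477 print helper itself):

#include <stdio.h>
#include "spdk/nvme_spec.h"

/* Decode the fields behind a line like:
 * INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 */
static void decode_cpl(const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_INVALID_NAMESPACE_OR_FORMAT) {
		printf("invalid namespace or format\n");
	}
	printf("(%02x/%02x) cid:%u cdw0:%u sqhd:%04x p:%u m:%u dnr:%u\n",
	       cpl->status.sct, cpl->status.sc, cpl->cid, cpl->cdw0,
	       cpl->sqhd, cpl->status.p, cpl->status.m, cpl->status.dnr);
}

int main(void)
{
	/* Fabricate the completion the log shows, then decode it. */
	struct spdk_nvme_cpl cpl = {0};

	cpl.sqhd = 0x0002;
	cpl.status.sc = SPDK_NVME_SC_INVALID_NAMESPACE_OR_FORMAT;
	cpl.status.dnr = 1;
	decode_cpl(&cpl);
	return 0;
}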
00:06:58.281 [2024-05-16 20:05:45.229398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.229435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.281 [2024-05-16 20:05:45.229448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.281 #24 NEW cov: 12127 ft: 15045 corp: 22/494b lim: 50 exec/s: 24 rss: 72Mb L: 27/48 MS: 1 CrossOver- 00:06:58.281 [2024-05-16 20:05:45.279936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.281 [2024-05-16 20:05:45.279961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.280010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.281 [2024-05-16 20:05:45.280020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.280069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.281 [2024-05-16 20:05:45.280081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.280130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:58.281 [2024-05-16 20:05:45.280143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.280195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:06:58.281 [2024-05-16 20:05:45.280207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.281 #25 NEW cov: 12127 ft: 15083 corp: 23/544b lim: 50 exec/s: 25 rss: 72Mb L: 50/50 MS: 1 CrossOver- 00:06:58.281 [2024-05-16 20:05:45.329923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.281 [2024-05-16 20:05:45.329947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.329999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.281 [2024-05-16 20:05:45.330028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.330081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.281 [2024-05-16 20:05:45.330094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.330150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:58.281 [2024-05-16 20:05:45.330162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.281 #26 NEW cov: 12127 ft: 15093 corp: 24/590b lim: 50 exec/s: 26 rss: 73Mb L: 46/50 MS: 1 InsertByte- 00:06:58.281 [2024-05-16 20:05:45.379761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.281 [2024-05-16 20:05:45.379785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.281 [2024-05-16 20:05:45.379824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.281 [2024-05-16 20:05:45.379836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.281 #27 NEW cov: 12127 ft: 15094 corp: 25/611b lim: 50 exec/s: 27 rss: 73Mb L: 21/50 MS: 1 CopyPart- 00:06:58.541 [2024-05-16 20:05:45.429827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.541 [2024-05-16 20:05:45.429853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.541 #28 NEW cov: 12127 ft: 15178 corp: 26/623b lim: 50 exec/s: 28 rss: 73Mb L: 12/50 MS: 1 ChangeByte- 00:06:58.541 [2024-05-16 20:05:45.470156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.541 [2024-05-16 20:05:45.470183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.470219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.541 [2024-05-16 20:05:45.470231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.470284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.541 [2024-05-16 20:05:45.470296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.541 #29 NEW cov: 12127 ft: 15181 corp: 27/657b lim: 50 exec/s: 29 rss: 73Mb L: 34/50 MS: 1 ChangeBit- 00:06:58.541 [2024-05-16 20:05:45.520181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.541 [2024-05-16 20:05:45.520205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.520263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.541 [2024-05-16 20:05:45.520276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.541 #30 NEW cov: 12127 ft: 15233 corp: 28/677b lim: 50 exec/s: 30 rss: 73Mb L: 20/50 MS: 1 EraseBytes- 00:06:58.541 [2024-05-16 20:05:45.570760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.541 [2024-05-16 20:05:45.570785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.570836] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.541 [2024-05-16 20:05:45.570847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.570898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.541 [2024-05-16 20:05:45.570912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.570964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:58.541 [2024-05-16 20:05:45.570977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.571029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:06:58.541 [2024-05-16 20:05:45.571042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.541 #31 NEW cov: 12127 ft: 15244 corp: 29/727b lim: 50 exec/s: 31 rss: 73Mb L: 50/50 MS: 1 CopyPart- 00:06:58.541 [2024-05-16 20:05:45.620462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.541 [2024-05-16 20:05:45.620486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.620527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.541 [2024-05-16 20:05:45.620540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.541 #32 NEW cov: 12127 ft: 15249 corp: 30/748b lim: 50 exec/s: 32 rss: 73Mb L: 21/50 MS: 1 CopyPart- 00:06:58.541 [2024-05-16 20:05:45.660587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.541 [2024-05-16 20:05:45.660610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.541 [2024-05-16 20:05:45.660650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.541 [2024-05-16 20:05:45.660661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.541 #33 NEW cov: 12127 ft: 15259 corp: 31/770b lim: 50 exec/s: 33 rss: 73Mb L: 22/50 MS: 1 CrossOver- 00:06:58.800 [2024-05-16 20:05:45.701033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.800 [2024-05-16 20:05:45.701057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.701115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.800 [2024-05-16 20:05:45.701128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.701180] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.800 [2024-05-16 20:05:45.701192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.701244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:58.800 [2024-05-16 20:05:45.701257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.800 #34 NEW cov: 12127 ft: 15272 corp: 32/813b lim: 50 exec/s: 34 rss: 73Mb L: 43/50 MS: 1 EraseBytes- 00:06:58.800 [2024-05-16 20:05:45.740926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.800 [2024-05-16 20:05:45.740951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.741000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.800 [2024-05-16 20:05:45.741013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.741064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.800 [2024-05-16 20:05:45.741077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.800 #35 NEW cov: 12127 ft: 15290 corp: 33/847b lim: 50 exec/s: 35 rss: 73Mb L: 34/50 MS: 1 CrossOver- 00:06:58.800 [2024-05-16 20:05:45.791079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.800 [2024-05-16 20:05:45.791104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.791153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.800 [2024-05-16 20:05:45.791164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.791235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:58.800 [2024-05-16 20:05:45.791248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.800 #36 NEW cov: 12127 ft: 15326 corp: 34/881b lim: 50 exec/s: 36 rss: 73Mb L: 34/50 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:58.800 [2024-05-16 20:05:45.831048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.800 [2024-05-16 20:05:45.831074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.831134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.800 [2024-05-16 20:05:45.831151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.800 #37 NEW cov: 12127 
ft: 15338 corp: 35/902b lim: 50 exec/s: 37 rss: 73Mb L: 21/50 MS: 1 ChangeByte- 00:06:58.800 [2024-05-16 20:05:45.871149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.800 [2024-05-16 20:05:45.871174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.800 [2024-05-16 20:05:45.871212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.800 [2024-05-16 20:05:45.871224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.800 #38 NEW cov: 12127 ft: 15343 corp: 36/923b lim: 50 exec/s: 38 rss: 74Mb L: 21/50 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:58.801 [2024-05-16 20:05:45.921286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:58.801 [2024-05-16 20:05:45.921311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.801 [2024-05-16 20:05:45.921346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:58.801 [2024-05-16 20:05:45.921358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.801 #39 NEW cov: 12127 ft: 15350 corp: 37/950b lim: 50 exec/s: 39 rss: 74Mb L: 27/50 MS: 1 ChangeBit- 00:06:59.060 [2024-05-16 20:05:45.961676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.060 [2024-05-16 20:05:45.961701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.060 [2024-05-16 20:05:45.961752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:59.060 [2024-05-16 20:05:45.961765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.060 [2024-05-16 20:05:45.961816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:59.060 [2024-05-16 20:05:45.961829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.060 [2024-05-16 20:05:45.961882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:59.060 [2024-05-16 20:05:45.961894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.060 #40 NEW cov: 12127 ft: 15369 corp: 38/998b lim: 50 exec/s: 40 rss: 74Mb L: 48/50 MS: 1 ShuffleBytes- 00:06:59.060 [2024-05-16 20:05:46.001393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.060 [2024-05-16 20:05:46.001418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.060 #41 NEW cov: 12127 ft: 15425 corp: 39/1009b lim: 50 exec/s: 41 rss: 74Mb L: 11/50 MS: 1 CrossOver- 00:06:59.060 [2024-05-16 20:05:46.041499] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.060 [2024-05-16 20:05:46.041526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.060 #42 NEW cov: 12127 ft: 15431 corp: 40/1021b lim: 50 exec/s: 42 rss: 74Mb L: 12/50 MS: 1 CopyPart- 00:06:59.060 [2024-05-16 20:05:46.081905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.060 [2024-05-16 20:05:46.081931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.060 [2024-05-16 20:05:46.081973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:59.060 [2024-05-16 20:05:46.081985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.060 [2024-05-16 20:05:46.082034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:59.060 [2024-05-16 20:05:46.082048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.060 #43 NEW cov: 12127 ft: 15437 corp: 41/1054b lim: 50 exec/s: 43 rss: 74Mb L: 33/50 MS: 1 ChangeBinInt- 00:06:59.060 [2024-05-16 20:05:46.132084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.060 [2024-05-16 20:05:46.132110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.060 [2024-05-16 20:05:46.132175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:59.060 [2024-05-16 20:05:46.132188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.060 [2024-05-16 20:05:46.132240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:59.060 [2024-05-16 20:05:46.132253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.060 #44 NEW cov: 12127 ft: 15439 corp: 42/1087b lim: 50 exec/s: 44 rss: 74Mb L: 33/50 MS: 1 EraseBytes- 00:06:59.060 [2024-05-16 20:05:46.172030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.060 [2024-05-16 20:05:46.172055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.060 [2024-05-16 20:05:46.172089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:59.060 [2024-05-16 20:05:46.172102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.060 #45 NEW cov: 12127 ft: 15447 corp: 43/1114b lim: 50 exec/s: 45 rss: 74Mb L: 27/50 MS: 1 InsertRepeatedBytes- 00:06:59.320 [2024-05-16 20:05:46.222447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.320 [2024-05-16 20:05:46.222476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.320 [2024-05-16 20:05:46.222526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:59.320 [2024-05-16 20:05:46.222539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.320 [2024-05-16 20:05:46.222590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:59.320 [2024-05-16 20:05:46.222603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.320 [2024-05-16 20:05:46.222653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:59.320 [2024-05-16 20:05:46.222667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.320 #46 NEW cov: 12127 ft: 15448 corp: 44/1160b lim: 50 exec/s: 23 rss: 74Mb L: 46/50 MS: 1 CrossOver- 00:06:59.320 #46 DONE cov: 12127 ft: 15448 corp: 44/1160b lim: 50 exec/s: 23 rss: 74Mb 00:06:59.320 ###### Recommended dictionary. ###### 00:06:59.320 "\000\000\000\000\000\000\000\000" # Uses: 1 00:06:59.320 ###### End of recommended dictionary. ###### 00:06:59.320 Done 46 runs in 2 second(s) 00:06:59.320 [2024-05-16 20:05:46.244074] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 
00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.320 20:05:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:06:59.320 [2024-05-16 20:05:46.408014] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:59.320 [2024-05-16 20:05:46.408093] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672184 ] 00:06:59.320 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.579 [2024-05-16 20:05:46.567072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.579 [2024-05-16 20:05:46.630665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.579 [2024-05-16 20:05:46.689073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.579 [2024-05-16 20:05:46.705034] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:59.579 [2024-05-16 20:05:46.705402] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:06:59.579 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.579 INFO: Seed: 1518069604 00:06:59.838 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:06:59.838 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:06:59.838 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:59.838 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.838 #2 INITED exec/s: 0 rss: 64Mb 00:06:59.838 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:59.838 This may also happen if the target rejected all inputs we tried so far 00:06:59.838 [2024-05-16 20:05:46.763330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:59.838 [2024-05-16 20:05:46.763358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.838 [2024-05-16 20:05:46.763415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:59.838 [2024-05-16 20:05:46.763429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.838 [2024-05-16 20:05:46.763487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:59.838 [2024-05-16 20:05:46.763500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.838 [2024-05-16 20:05:46.763550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:59.839 [2024-05-16 20:05:46.763563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.839 NEW_FUNC[1/687]: 0x4aa4b0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:06:59.839 NEW_FUNC[2/687]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.839 #4 NEW cov: 11909 ft: 11910 corp: 2/80b lim: 85 exec/s: 0 rss: 71Mb L: 79/79 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:59.839 [2024-05-16 20:05:46.913915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:59.839 [2024-05-16 20:05:46.913968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.839 [2024-05-16 20:05:46.914045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:59.839 [2024-05-16 20:05:46.914069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.839 [2024-05-16 20:05:46.914143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:59.839 [2024-05-16 20:05:46.914165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.839 #7 NEW cov: 12039 ft: 12805 corp: 3/138b lim: 85 exec/s: 0 rss: 71Mb L: 58/79 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:06:59.839 [2024-05-16 20:05:46.963857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:59.839 [2024-05-16 20:05:46.963882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.839 [2024-05-16 20:05:46.963952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:59.839 [2024-05-16 20:05:46.963966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.839 [2024-05-16 20:05:46.964018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:59.839 [2024-05-16 20:05:46.964031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.839 [2024-05-16 20:05:46.964086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:59.839 [2024-05-16 20:05:46.964099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.098 #8 NEW cov: 12045 ft: 13023 corp: 4/209b lim: 85 exec/s: 0 rss: 71Mb L: 71/79 MS: 1 EraseBytes- 00:07:00.098 [2024-05-16 20:05:47.013851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.098 [2024-05-16 20:05:47.013876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.013943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.098 [2024-05-16 20:05:47.013958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.014010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.098 [2024-05-16 20:05:47.014023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.098 #9 NEW cov: 12130 ft: 13217 corp: 5/267b lim: 85 exec/s: 0 rss: 71Mb L: 58/79 MS: 1 CopyPart- 00:07:00.098 [2024-05-16 20:05:47.063993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.098 [2024-05-16 20:05:47.064018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.064084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.098 [2024-05-16 20:05:47.064097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.064150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.098 [2024-05-16 20:05:47.064163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.098 #10 NEW cov: 12130 ft: 13448 corp: 6/325b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 CopyPart- 00:07:00.098 [2024-05-16 20:05:47.114162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.098 [2024-05-16 20:05:47.114186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.114255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.098 [2024-05-16 20:05:47.114268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.114320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.098 [2024-05-16 20:05:47.114333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.098 #11 NEW cov: 12130 ft: 13577 corp: 7/383b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 ChangeBinInt- 00:07:00.098 [2024-05-16 20:05:47.154273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.098 [2024-05-16 20:05:47.154297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.154349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.098 [2024-05-16 20:05:47.154362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.154418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.098 [2024-05-16 20:05:47.154431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.098 #12 NEW cov: 12130 ft: 13654 corp: 8/441b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 CrossOver- 00:07:00.098 [2024-05-16 20:05:47.194404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.098 [2024-05-16 20:05:47.194430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.194492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.098 [2024-05-16 20:05:47.194507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.194561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.098 [2024-05-16 20:05:47.194573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.098 #13 NEW cov: 12130 ft: 13675 corp: 9/499b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 CopyPart- 00:07:00.098 [2024-05-16 20:05:47.234513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.098 [2024-05-16 20:05:47.234541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.234589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.098 [2024-05-16 20:05:47.234602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.098 [2024-05-16 20:05:47.234656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.098 [2024-05-16 20:05:47.234668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.358 #14 NEW cov: 12130 ft: 13787 corp: 10/557b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 CMP- DE: "\001\000"- 00:07:00.358 [2024-05-16 20:05:47.284643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.358 [2024-05-16 20:05:47.284669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.284719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.358 [2024-05-16 20:05:47.284730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.284782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.358 [2024-05-16 20:05:47.284794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.358 #15 NEW cov: 12130 ft: 13876 corp: 11/615b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 ChangeBinInt- 00:07:00.358 [2024-05-16 20:05:47.324810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.358 [2024-05-16 20:05:47.324836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.324903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.358 [2024-05-16 20:05:47.324915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.324969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.358 [2024-05-16 20:05:47.324981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.358 #16 NEW cov: 12130 ft: 13914 corp: 12/673b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 ChangeBinInt- 00:07:00.358 [2024-05-16 20:05:47.365016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.358 [2024-05-16 20:05:47.365042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.365096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.358 [2024-05-16 20:05:47.365108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.365159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.358 [2024-05-16 20:05:47.365175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.365226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:00.358 [2024-05-16 20:05:47.365238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.358 #17 NEW cov: 12130 ft: 13974 corp: 13/744b lim: 85 exec/s: 0 rss: 72Mb L: 71/79 MS: 1 CrossOver- 00:07:00.358 [2024-05-16 20:05:47.415057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.358 [2024-05-16 20:05:47.415084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.415126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.358 [2024-05-16 20:05:47.415138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.415189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.358 [2024-05-16 20:05:47.415202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.358 #18 NEW cov: 12130 ft: 13982 corp: 14/802b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 ShuffleBytes- 00:07:00.358 [2024-05-16 20:05:47.465190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.358 [2024-05-16 20:05:47.465216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.465270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.358 [2024-05-16 20:05:47.465284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.358 [2024-05-16 20:05:47.465336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.358 [2024-05-16 20:05:47.465349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.358 #19 NEW cov: 12130 ft: 14002 corp: 15/860b lim: 85 exec/s: 0 rss: 72Mb L: 58/79 MS: 1 CrossOver- 00:07:00.619 [2024-05-16 20:05:47.515544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.619 [2024-05-16 20:05:47.515570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.515622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.619 [2024-05-16 20:05:47.515635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.515703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.619 [2024-05-16 20:05:47.515716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.515770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:00.619 [2024-05-16 20:05:47.515784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.619 #20 NEW cov: 12130 ft: 14073 corp: 16/930b lim: 85 exec/s: 0 rss: 72Mb L: 70/79 MS: 1 EraseBytes- 00:07:00.619 [2024-05-16 20:05:47.565467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.619 [2024-05-16 20:05:47.565493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.565556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.619 [2024-05-16 20:05:47.565569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.565622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.619 [2024-05-16 20:05:47.565635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.619 #21 NEW cov: 12130 ft: 14096 corp: 17/992b lim: 85 exec/s: 0 rss: 72Mb L: 62/79 MS: 1 CopyPart- 00:07:00.619 [2024-05-16 20:05:47.605777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.619 [2024-05-16 20:05:47.605801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.605855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.619 [2024-05-16 20:05:47.605868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.605920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.619 [2024-05-16 20:05:47.605933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.605986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:00.619 [2024-05-16 20:05:47.605999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.619 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:00.619 #22 NEW cov: 12153 ft: 14113 corp: 18/1069b lim: 85 exec/s: 0 rss: 72Mb L: 77/79 MS: 1 CrossOver- 00:07:00.619 [2024-05-16 20:05:47.655782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.619 [2024-05-16 20:05:47.655807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.655856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.619 [2024-05-16 20:05:47.655868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.655920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 
00:07:00.619 [2024-05-16 20:05:47.655933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.619 #23 NEW cov: 12153 ft: 14137 corp: 19/1128b lim: 85 exec/s: 0 rss: 73Mb L: 59/79 MS: 1 CopyPart- 00:07:00.619 [2024-05-16 20:05:47.706226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.619 [2024-05-16 20:05:47.706250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.706318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.619 [2024-05-16 20:05:47.706329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.706382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.619 [2024-05-16 20:05:47.706395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.619 [2024-05-16 20:05:47.706450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:00.620 [2024-05-16 20:05:47.706469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.620 [2024-05-16 20:05:47.706523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:00.620 [2024-05-16 20:05:47.706536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.620 #24 NEW cov: 12153 ft: 14189 corp: 20/1213b lim: 85 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:07:00.620 [2024-05-16 20:05:47.756219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.620 [2024-05-16 20:05:47.756244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.620 [2024-05-16 20:05:47.756295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.620 [2024-05-16 20:05:47.756307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.620 [2024-05-16 20:05:47.756357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.620 [2024-05-16 20:05:47.756370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.620 [2024-05-16 20:05:47.756422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:00.620 [2024-05-16 20:05:47.756435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.879 #25 NEW cov: 12153 ft: 14199 corp: 21/1295b lim: 85 exec/s: 25 rss: 73Mb L: 82/85 MS: 1 CrossOver- 00:07:00.879 [2024-05-16 20:05:47.796164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:0 nsid:0 00:07:00.879 [2024-05-16 20:05:47.796189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.879 [2024-05-16 20:05:47.796258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.879 [2024-05-16 20:05:47.796271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.879 [2024-05-16 20:05:47.796324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.879 [2024-05-16 20:05:47.796337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.879 #26 NEW cov: 12153 ft: 14213 corp: 22/1353b lim: 85 exec/s: 26 rss: 73Mb L: 58/85 MS: 1 ChangeByte- 00:07:00.879 [2024-05-16 20:05:47.836438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.879 [2024-05-16 20:05:47.836468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.836538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.880 [2024-05-16 20:05:47.836551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.836604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.880 [2024-05-16 20:05:47.836616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.836669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:00.880 [2024-05-16 20:05:47.836681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.880 #27 NEW cov: 12153 ft: 14232 corp: 23/1434b lim: 85 exec/s: 27 rss: 73Mb L: 81/85 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:00.880 [2024-05-16 20:05:47.876400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.880 [2024-05-16 20:05:47.876425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.876482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.880 [2024-05-16 20:05:47.876495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.876548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.880 [2024-05-16 20:05:47.876560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.880 #28 NEW cov: 12153 ft: 14238 corp: 24/1494b lim: 85 exec/s: 28 rss: 73Mb L: 60/85 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:00.880 [2024-05-16 20:05:47.926559] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.880 [2024-05-16 20:05:47.926584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.926653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.880 [2024-05-16 20:05:47.926667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.926720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.880 [2024-05-16 20:05:47.926733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.880 #29 NEW cov: 12153 ft: 14247 corp: 25/1554b lim: 85 exec/s: 29 rss: 73Mb L: 60/85 MS: 1 InsertByte- 00:07:00.880 [2024-05-16 20:05:47.976733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:00.880 [2024-05-16 20:05:47.976758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.976827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:00.880 [2024-05-16 20:05:47.976840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.880 [2024-05-16 20:05:47.976894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:00.880 [2024-05-16 20:05:47.976908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.139 [2024-05-16 20:05:48.026855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.139 [2024-05-16 20:05:48.026880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.139 [2024-05-16 20:05:48.026933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.139 [2024-05-16 20:05:48.026945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.139 [2024-05-16 20:05:48.026999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.139 [2024-05-16 20:05:48.027012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.140 #31 NEW cov: 12153 ft: 14255 corp: 26/1616b lim: 85 exec/s: 31 rss: 73Mb L: 62/85 MS: 2 CopyPart-ShuffleBytes- 00:07:01.140 [2024-05-16 20:05:48.067127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.140 [2024-05-16 20:05:48.067151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.067220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.140 [2024-05-16 20:05:48.067233] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.067287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.140 [2024-05-16 20:05:48.067299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.067352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.140 [2024-05-16 20:05:48.067365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.140 #32 NEW cov: 12153 ft: 14324 corp: 27/1697b lim: 85 exec/s: 32 rss: 73Mb L: 81/85 MS: 1 InsertRepeatedBytes- 00:07:01.140 [2024-05-16 20:05:48.107209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.140 [2024-05-16 20:05:48.107234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.107304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.140 [2024-05-16 20:05:48.107317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.107368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.140 [2024-05-16 20:05:48.107380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.107433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.140 [2024-05-16 20:05:48.107445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.140 #33 NEW cov: 12153 ft: 14359 corp: 28/1779b lim: 85 exec/s: 33 rss: 73Mb L: 82/85 MS: 1 ChangeBit- 00:07:01.140 [2024-05-16 20:05:48.157392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.140 [2024-05-16 20:05:48.157416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.157494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.140 [2024-05-16 20:05:48.157508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.157558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.140 [2024-05-16 20:05:48.157572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.157625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.140 [2024-05-16 20:05:48.157638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.140 #34 NEW cov: 12153 ft: 14374 corp: 29/1862b lim: 85 exec/s: 34 rss: 73Mb L: 83/85 MS: 1 InsertByte- 00:07:01.140 [2024-05-16 20:05:48.197358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.140 [2024-05-16 20:05:48.197383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.197440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.140 [2024-05-16 20:05:48.197461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.197515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.140 [2024-05-16 20:05:48.197529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.140 #37 NEW cov: 12153 ft: 14405 corp: 30/1925b lim: 85 exec/s: 37 rss: 73Mb L: 63/85 MS: 3 CopyPart-InsertByte-CrossOver- 00:07:01.140 [2024-05-16 20:05:48.237689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.140 [2024-05-16 20:05:48.237714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.237783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.140 [2024-05-16 20:05:48.237796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.237848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.140 [2024-05-16 20:05:48.237861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.237914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.140 [2024-05-16 20:05:48.237927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.140 #38 NEW cov: 12153 ft: 14412 corp: 31/2002b lim: 85 exec/s: 38 rss: 73Mb L: 77/85 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:01.140 [2024-05-16 20:05:48.277596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.140 [2024-05-16 20:05:48.277622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.277672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.140 [2024-05-16 20:05:48.277686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.140 [2024-05-16 20:05:48.277739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.140 [2024-05-16 20:05:48.277753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.400 #39 NEW cov: 12153 ft: 14435 corp: 32/2061b lim: 85 exec/s: 39 rss: 74Mb L: 59/85 MS: 1 InsertByte- 00:07:01.400 [2024-05-16 20:05:48.317902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.400 [2024-05-16 20:05:48.317927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.317980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.400 [2024-05-16 20:05:48.317994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.318045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.400 [2024-05-16 20:05:48.318059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.318114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.400 [2024-05-16 20:05:48.318129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.400 #40 NEW cov: 12153 ft: 14455 corp: 33/2143b lim: 85 exec/s: 40 rss: 74Mb L: 82/85 MS: 1 ChangeBit- 00:07:01.400 [2024-05-16 20:05:48.367810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.400 [2024-05-16 20:05:48.367835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.367883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.400 [2024-05-16 20:05:48.367894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.367948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.400 [2024-05-16 20:05:48.367961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.400 #41 NEW cov: 12153 ft: 14462 corp: 34/2203b lim: 85 exec/s: 41 rss: 74Mb L: 60/85 MS: 1 ChangeByte- 00:07:01.400 [2024-05-16 20:05:48.417960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.400 [2024-05-16 20:05:48.417985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.418053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.400 [2024-05-16 20:05:48.418067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.418121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.400 [2024-05-16 20:05:48.418133] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.400 #42 NEW cov: 12153 ft: 14476 corp: 35/2261b lim: 85 exec/s: 42 rss: 74Mb L: 58/85 MS: 1 ChangeBinInt- 00:07:01.400 [2024-05-16 20:05:48.458103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.400 [2024-05-16 20:05:48.458130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.458191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.400 [2024-05-16 20:05:48.458204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.458260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.400 [2024-05-16 20:05:48.458272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.400 #43 NEW cov: 12153 ft: 14486 corp: 36/2316b lim: 85 exec/s: 43 rss: 74Mb L: 55/85 MS: 1 EraseBytes- 00:07:01.400 [2024-05-16 20:05:48.508564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.400 [2024-05-16 20:05:48.508589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.508644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.400 [2024-05-16 20:05:48.508656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.508711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.400 [2024-05-16 20:05:48.508723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.508778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.400 [2024-05-16 20:05:48.508791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.400 [2024-05-16 20:05:48.508844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:01.400 [2024-05-16 20:05:48.508857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.400 #44 NEW cov: 12153 ft: 14487 corp: 37/2401b lim: 85 exec/s: 44 rss: 74Mb L: 85/85 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:01.660 [2024-05-16 20:05:48.558524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.660 [2024-05-16 20:05:48.558549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.660 [2024-05-16 20:05:48.558621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.660 [2024-05-16 20:05:48.558634] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.660 [2024-05-16 20:05:48.558684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.660 [2024-05-16 20:05:48.558697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.558751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.661 [2024-05-16 20:05:48.558763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.661 #45 NEW cov: 12153 ft: 14502 corp: 38/2483b lim: 85 exec/s: 45 rss: 74Mb L: 82/85 MS: 1 ChangeByte- 00:07:01.661 [2024-05-16 20:05:48.608694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.661 [2024-05-16 20:05:48.608720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.608774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.661 [2024-05-16 20:05:48.608786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.608856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.661 [2024-05-16 20:05:48.608869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.608923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.661 [2024-05-16 20:05:48.608936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.661 #46 NEW cov: 12153 ft: 14513 corp: 39/2562b lim: 85 exec/s: 46 rss: 74Mb L: 79/85 MS: 1 EraseBytes- 00:07:01.661 [2024-05-16 20:05:48.648631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.661 [2024-05-16 20:05:48.648657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.648722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.661 [2024-05-16 20:05:48.648735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.648789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.661 [2024-05-16 20:05:48.648803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.661 #47 NEW cov: 12153 ft: 14538 corp: 40/2620b lim: 85 exec/s: 47 rss: 74Mb L: 58/85 MS: 1 ChangeBit- 00:07:01.661 [2024-05-16 20:05:48.688750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.661 [2024-05-16 
20:05:48.688775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.688825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.661 [2024-05-16 20:05:48.688836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.688890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.661 [2024-05-16 20:05:48.688903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.661 #48 NEW cov: 12153 ft: 14553 corp: 41/2680b lim: 85 exec/s: 48 rss: 74Mb L: 60/85 MS: 1 CrossOver- 00:07:01.661 [2024-05-16 20:05:48.739035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:01.661 [2024-05-16 20:05:48.739060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.739129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:01.661 [2024-05-16 20:05:48.739141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.739193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:01.661 [2024-05-16 20:05:48.739206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.661 [2024-05-16 20:05:48.739259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:01.661 [2024-05-16 20:05:48.739271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.661 #54 NEW cov: 12153 ft: 14575 corp: 42/2763b lim: 85 exec/s: 27 rss: 74Mb L: 83/85 MS: 1 CrossOver- 00:07:01.661 #54 DONE cov: 12153 ft: 14575 corp: 42/2763b lim: 85 exec/s: 27 rss: 74Mb 00:07:01.661 ###### Recommended dictionary. ###### 00:07:01.661 "\001\000" # Uses: 3 00:07:01.661 "\377\377\377\377\377\377\377\377" # Uses: 0 00:07:01.661 ###### End of recommended dictionary. 
###### 00:07:01.661 Done 54 runs in 2 second(s) 00:07:01.661 [2024-05-16 20:05:48.773909] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.920 20:05:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:01.920 [2024-05-16 20:05:48.936286] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:01.920 [2024-05-16 20:05:48.936363] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672599 ] 00:07:01.920 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.180 [2024-05-16 20:05:49.100133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.180 [2024-05-16 20:05:49.164289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.180 [2024-05-16 20:05:49.222752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.181 [2024-05-16 20:05:49.238718] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:02.181 [2024-05-16 20:05:49.239059] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:02.181 INFO: Running with entropic power schedule (0xFF, 100). 00:07:02.181 INFO: Seed: 4054067215 00:07:02.181 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:07:02.181 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:07:02.181 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:02.181 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.181 #2 INITED exec/s: 0 rss: 63Mb 00:07:02.181 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:02.181 This may also happen if the target rejected all inputs we tried so far 00:07:02.181 [2024-05-16 20:05:49.284104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.181 [2024-05-16 20:05:49.284133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.441 NEW_FUNC[1/686]: 0x4ad6e0 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:02.441 NEW_FUNC[2/686]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.441 #7 NEW cov: 11842 ft: 11829 corp: 2/9b lim: 25 exec/s: 0 rss: 71Mb L: 8/8 MS: 5 InsertByte-InsertByte-InsertByte-ChangeBit-CopyPart- 00:07:02.441 [2024-05-16 20:05:49.434468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.441 [2024-05-16 20:05:49.434501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.441 #8 NEW cov: 11972 ft: 12266 corp: 3/16b lim: 25 exec/s: 0 rss: 72Mb L: 7/8 MS: 1 EraseBytes- 00:07:02.441 [2024-05-16 20:05:49.484908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.441 [2024-05-16 20:05:49.484936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.441 [2024-05-16 20:05:49.484981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:02.441 [2024-05-16 20:05:49.484994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.441 [2024-05-16 20:05:49.485043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:02.441 [2024-05-16 20:05:49.485055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.441 [2024-05-16 20:05:49.485109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:02.441 [2024-05-16 20:05:49.485122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.441 #13 NEW cov: 11978 ft: 13218 corp: 4/37b lim: 25 exec/s: 0 rss: 72Mb L: 21/21 MS: 5 CrossOver-InsertByte-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:02.441 [2024-05-16 20:05:49.524638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.441 [2024-05-16 20:05:49.524663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.441 #14 NEW cov: 12063 ft: 13578 corp: 5/45b lim: 25 exec/s: 0 rss: 72Mb L: 8/21 MS: 1 ChangeBit- 00:07:02.441 [2024-05-16 20:05:49.565021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.441 [2024-05-16 20:05:49.565046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.441 [2024-05-16 20:05:49.565097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:02.441 [2024-05-16 20:05:49.565109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.441 [2024-05-16 20:05:49.565159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:02.441 [2024-05-16 20:05:49.565171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.441 #16 NEW cov: 12063 ft: 13859 corp: 6/62b lim: 25 exec/s: 0 rss: 72Mb L: 17/21 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:02.700 [2024-05-16 20:05:49.604883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.700 [2024-05-16 20:05:49.604908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.700 #17 NEW cov: 12063 ft: 13914 corp: 7/70b lim: 25 exec/s: 0 rss: 72Mb L: 8/21 MS: 1 ChangeBit- 00:07:02.700 [2024-05-16 20:05:49.655035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.700 [2024-05-16 20:05:49.655059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.700 #20 NEW cov: 12063 ft: 13974 corp: 8/75b lim: 25 exec/s: 0 rss: 72Mb L: 5/21 MS: 3 ChangeByte-ChangeBit-CrossOver- 00:07:02.700 [2024-05-16 20:05:49.695430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.700 [2024-05-16 20:05:49.695459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:02.700 [2024-05-16 20:05:49.695508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:02.700 [2024-05-16 20:05:49.695522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.700 [2024-05-16 20:05:49.695572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:02.700 [2024-05-16 20:05:49.695585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.700 #21 NEW cov: 12063 ft: 14091 corp: 9/93b lim: 25 exec/s: 0 rss: 72Mb L: 18/21 MS: 1 InsertByte- 00:07:02.700 [2024-05-16 20:05:49.745660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.700 [2024-05-16 20:05:49.745684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.700 [2024-05-16 20:05:49.745752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:02.700 [2024-05-16 20:05:49.745765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.701 [2024-05-16 20:05:49.745817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:02.701 [2024-05-16 20:05:49.745829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.701 [2024-05-16 20:05:49.745882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:02.701 [2024-05-16 20:05:49.745894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.701 #22 NEW cov: 12063 ft: 14123 corp: 10/117b lim: 25 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 CrossOver- 00:07:02.701 [2024-05-16 20:05:49.795414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.701 [2024-05-16 20:05:49.795438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.701 #23 NEW cov: 12063 ft: 14183 corp: 11/125b lim: 25 exec/s: 0 rss: 72Mb L: 8/24 MS: 1 ChangeBinInt- 00:07:02.701 [2024-05-16 20:05:49.845716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.701 [2024-05-16 20:05:49.845742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.701 [2024-05-16 20:05:49.845779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:02.701 [2024-05-16 20:05:49.845792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.959 #24 NEW cov: 12063 ft: 14392 corp: 12/137b lim: 25 exec/s: 0 rss: 72Mb L: 12/24 MS: 1 EraseBytes- 00:07:02.959 [2024-05-16 20:05:49.896086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.959 [2024-05-16 20:05:49.896109] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.959 [2024-05-16 20:05:49.896162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:02.959 [2024-05-16 20:05:49.896174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.959 [2024-05-16 20:05:49.896224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:02.959 [2024-05-16 20:05:49.896237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.959 [2024-05-16 20:05:49.896288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:02.959 [2024-05-16 20:05:49.896300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.959 #25 NEW cov: 12063 ft: 14412 corp: 13/161b lim: 25 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeByte- 00:07:02.959 [2024-05-16 20:05:49.936056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.959 [2024-05-16 20:05:49.936079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.959 [2024-05-16 20:05:49.936144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:02.959 [2024-05-16 20:05:49.936157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.960 [2024-05-16 20:05:49.936208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:02.960 [2024-05-16 20:05:49.936220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.960 #26 NEW cov: 12063 ft: 14418 corp: 14/178b lim: 25 exec/s: 0 rss: 72Mb L: 17/24 MS: 1 CMP- DE: "\027\001\000\000"- 00:07:02.960 [2024-05-16 20:05:49.975961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.960 [2024-05-16 20:05:49.975986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.960 #27 NEW cov: 12063 ft: 14499 corp: 15/185b lim: 25 exec/s: 0 rss: 72Mb L: 7/24 MS: 1 PersAutoDict- DE: "\027\001\000\000"- 00:07:02.960 [2024-05-16 20:05:50.026529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.960 [2024-05-16 20:05:50.026558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.960 [2024-05-16 20:05:50.026623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:02.960 [2024-05-16 20:05:50.026639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.960 [2024-05-16 20:05:50.026693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:02.960 [2024-05-16 20:05:50.026708] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.960 [2024-05-16 20:05:50.026765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:02.960 [2024-05-16 20:05:50.026780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.960 #28 NEW cov: 12063 ft: 14545 corp: 16/208b lim: 25 exec/s: 0 rss: 72Mb L: 23/24 MS: 1 InsertRepeatedBytes- 00:07:02.960 [2024-05-16 20:05:50.066218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:02.960 [2024-05-16 20:05:50.066248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.960 #29 NEW cov: 12063 ft: 14581 corp: 17/215b lim: 25 exec/s: 0 rss: 72Mb L: 7/24 MS: 1 ShuffleBytes- 00:07:03.219 [2024-05-16 20:05:50.116620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.219 [2024-05-16 20:05:50.116649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.116701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.219 [2024-05-16 20:05:50.116715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.116766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.219 [2024-05-16 20:05:50.116779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.219 #30 NEW cov: 12063 ft: 14611 corp: 18/234b lim: 25 exec/s: 0 rss: 72Mb L: 19/24 MS: 1 CopyPart- 00:07:03.219 [2024-05-16 20:05:50.166857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.219 [2024-05-16 20:05:50.166882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.166941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.219 [2024-05-16 20:05:50.166955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.167007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.219 [2024-05-16 20:05:50.167019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.167071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.219 [2024-05-16 20:05:50.167085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.219 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:03.219 #31 NEW cov: 12086 ft: 14684 corp: 19/258b lim: 25 exec/s: 0 rss: 73Mb 
L: 24/24 MS: 1 ChangeBit- 00:07:03.219 [2024-05-16 20:05:50.216965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.219 [2024-05-16 20:05:50.216990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.217057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.219 [2024-05-16 20:05:50.217071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.217121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.219 [2024-05-16 20:05:50.217133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.217185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.219 [2024-05-16 20:05:50.217198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.219 #32 NEW cov: 12086 ft: 14764 corp: 20/279b lim: 25 exec/s: 0 rss: 73Mb L: 21/24 MS: 1 CrossOver- 00:07:03.219 [2024-05-16 20:05:50.257126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.219 [2024-05-16 20:05:50.257150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.257216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.219 [2024-05-16 20:05:50.257227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.257277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.219 [2024-05-16 20:05:50.257289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.257338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.219 [2024-05-16 20:05:50.257351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.219 #33 NEW cov: 12086 ft: 14778 corp: 21/302b lim: 25 exec/s: 33 rss: 73Mb L: 23/24 MS: 1 CopyPart- 00:07:03.219 [2024-05-16 20:05:50.307158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.219 [2024-05-16 20:05:50.307184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.307236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.219 [2024-05-16 20:05:50.307252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.307303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.219 [2024-05-16 20:05:50.307315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.219 #35 NEW cov: 12086 ft: 14817 corp: 22/317b lim: 25 exec/s: 35 rss: 73Mb L: 15/24 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:03.219 [2024-05-16 20:05:50.347303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.219 [2024-05-16 20:05:50.347328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.347392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.219 [2024-05-16 20:05:50.347406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.219 [2024-05-16 20:05:50.347472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.219 [2024-05-16 20:05:50.347485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.479 #36 NEW cov: 12086 ft: 14840 corp: 23/335b lim: 25 exec/s: 36 rss: 73Mb L: 18/24 MS: 1 ChangeBit- 00:07:03.479 [2024-05-16 20:05:50.387513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.479 [2024-05-16 20:05:50.387537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.387605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.479 [2024-05-16 20:05:50.387618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.387668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.479 [2024-05-16 20:05:50.387681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.387733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.479 [2024-05-16 20:05:50.387746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.479 #37 NEW cov: 12086 ft: 14853 corp: 24/355b lim: 25 exec/s: 37 rss: 73Mb L: 20/24 MS: 1 CMP- DE: "\200~pBm\245\006\000"- 00:07:03.479 [2024-05-16 20:05:50.437349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.479 [2024-05-16 20:05:50.437375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.479 #38 NEW cov: 12086 ft: 14862 corp: 25/362b lim: 25 exec/s: 38 rss: 73Mb L: 7/24 MS: 1 ShuffleBytes- 00:07:03.479 [2024-05-16 20:05:50.477745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.479 [2024-05-16 20:05:50.477769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.477837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.479 [2024-05-16 20:05:50.477850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.477900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.479 [2024-05-16 20:05:50.477913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.477968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.479 [2024-05-16 20:05:50.477980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.479 #39 NEW cov: 12086 ft: 14925 corp: 26/386b lim: 25 exec/s: 39 rss: 73Mb L: 24/24 MS: 1 ChangeASCIIInt- 00:07:03.479 [2024-05-16 20:05:50.527953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.479 [2024-05-16 20:05:50.527977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.528044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.479 [2024-05-16 20:05:50.528055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.528105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.479 [2024-05-16 20:05:50.528118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.528169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.479 [2024-05-16 20:05:50.528181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.479 #40 NEW cov: 12086 ft: 14981 corp: 27/409b lim: 25 exec/s: 40 rss: 73Mb L: 23/24 MS: 1 ChangeBit- 00:07:03.479 [2024-05-16 20:05:50.567928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.479 [2024-05-16 20:05:50.567952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.568020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.479 [2024-05-16 20:05:50.568033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.568084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.479 [2024-05-16 20:05:50.568097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.479 #41 NEW cov: 12086 ft: 14988 corp: 28/424b lim: 25 exec/s: 41 rss: 73Mb L: 
15/24 MS: 1 ChangeBinInt- 00:07:03.479 [2024-05-16 20:05:50.618172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.479 [2024-05-16 20:05:50.618196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.618262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.479 [2024-05-16 20:05:50.618274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.618326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.479 [2024-05-16 20:05:50.618339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.479 [2024-05-16 20:05:50.618390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.479 [2024-05-16 20:05:50.618402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.739 #42 NEW cov: 12086 ft: 15011 corp: 29/447b lim: 25 exec/s: 42 rss: 73Mb L: 23/24 MS: 1 ShuffleBytes- 00:07:03.739 [2024-05-16 20:05:50.668213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.739 [2024-05-16 20:05:50.668240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.668299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.739 [2024-05-16 20:05:50.668313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.668364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.739 [2024-05-16 20:05:50.668377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.739 #43 NEW cov: 12086 ft: 15064 corp: 30/465b lim: 25 exec/s: 43 rss: 74Mb L: 18/24 MS: 1 ShuffleBytes- 00:07:03.739 [2024-05-16 20:05:50.718492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.739 [2024-05-16 20:05:50.718515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.718583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.739 [2024-05-16 20:05:50.718596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.718649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.739 [2024-05-16 20:05:50.718662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.718714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.739 [2024-05-16 20:05:50.718727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.739 #44 NEW cov: 12086 ft: 15066 corp: 31/489b lim: 25 exec/s: 44 rss: 74Mb L: 24/24 MS: 1 InsertByte- 00:07:03.739 [2024-05-16 20:05:50.768695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.739 [2024-05-16 20:05:50.768719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.768786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.739 [2024-05-16 20:05:50.768796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.768847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.739 [2024-05-16 20:05:50.768859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.768911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.739 [2024-05-16 20:05:50.768923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.818785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.739 [2024-05-16 20:05:50.818809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.818875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.739 [2024-05-16 20:05:50.818886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.818937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.739 [2024-05-16 20:05:50.818950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.739 [2024-05-16 20:05:50.819004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:03.739 [2024-05-16 20:05:50.819017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.739 #46 NEW cov: 12086 ft: 15078 corp: 32/513b lim: 25 exec/s: 46 rss: 74Mb L: 24/24 MS: 2 InsertByte-PersAutoDict- DE: "\200~pBm\245\006\000"- 00:07:03.739 [2024-05-16 20:05:50.858744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:03.739 [2024-05-16 20:05:50.858767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.740 [2024-05-16 20:05:50.858835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:03.740 [2024-05-16 20:05:50.858848] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.740 [2024-05-16 20:05:50.858898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:03.740 [2024-05-16 20:05:50.858910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.000 #47 NEW cov: 12086 ft: 15089 corp: 33/528b lim: 25 exec/s: 47 rss: 74Mb L: 15/24 MS: 1 CrossOver- 00:07:04.000 [2024-05-16 20:05:50.908632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.000 [2024-05-16 20:05:50.908656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.000 #48 NEW cov: 12086 ft: 15091 corp: 34/533b lim: 25 exec/s: 48 rss: 74Mb L: 5/24 MS: 1 ChangeByte- 00:07:04.000 [2024-05-16 20:05:50.958811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.000 [2024-05-16 20:05:50.958836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.000 #49 NEW cov: 12086 ft: 15098 corp: 35/538b lim: 25 exec/s: 49 rss: 74Mb L: 5/24 MS: 1 EraseBytes- 00:07:04.000 [2024-05-16 20:05:50.999269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.000 [2024-05-16 20:05:50.999293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.000 [2024-05-16 20:05:50.999345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:04.000 [2024-05-16 20:05:50.999358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.000 [2024-05-16 20:05:50.999408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:04.000 [2024-05-16 20:05:50.999420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.000 [2024-05-16 20:05:50.999474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:04.000 [2024-05-16 20:05:50.999488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.000 #50 NEW cov: 12086 ft: 15138 corp: 36/561b lim: 25 exec/s: 50 rss: 74Mb L: 23/24 MS: 1 CMP- DE: "G\000\000\000\000\000\000\000"- 00:07:04.000 [2024-05-16 20:05:51.039320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.000 [2024-05-16 20:05:51.039344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.000 [2024-05-16 20:05:51.039409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:04.000 [2024-05-16 20:05:51.039422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.000 [2024-05-16 20:05:51.039480] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:04.000 [2024-05-16 20:05:51.039493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.000 #51 NEW cov: 12086 ft: 15156 corp: 37/580b lim: 25 exec/s: 51 rss: 74Mb L: 19/24 MS: 1 CrossOver- 00:07:04.000 [2024-05-16 20:05:51.079462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.000 [2024-05-16 20:05:51.079487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.000 [2024-05-16 20:05:51.079535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:04.000 [2024-05-16 20:05:51.079548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.000 [2024-05-16 20:05:51.079611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:04.000 [2024-05-16 20:05:51.079624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.000 [2024-05-16 20:05:51.079675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:04.000 [2024-05-16 20:05:51.079686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.000 #52 NEW cov: 12086 ft: 15158 corp: 38/601b lim: 25 exec/s: 52 rss: 74Mb L: 21/24 MS: 1 CrossOver- 00:07:04.000 [2024-05-16 20:05:51.129277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.000 [2024-05-16 20:05:51.129300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.260 #53 NEW cov: 12086 ft: 15162 corp: 39/608b lim: 25 exec/s: 53 rss: 74Mb L: 7/24 MS: 1 CrossOver- 00:07:04.260 [2024-05-16 20:05:51.169901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.260 [2024-05-16 20:05:51.169925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.169973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:04.260 [2024-05-16 20:05:51.169985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.170032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:04.260 [2024-05-16 20:05:51.170045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.170093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:04.260 [2024-05-16 20:05:51.170106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.170157] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:04.260 [2024-05-16 20:05:51.170170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:04.260 #54 NEW cov: 12086 ft: 15195 corp: 40/633b lim: 25 exec/s: 54 rss: 74Mb L: 25/25 MS: 1 InsertByte- 00:07:04.260 [2024-05-16 20:05:51.209869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.260 [2024-05-16 20:05:51.209893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.209943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:04.260 [2024-05-16 20:05:51.209958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.210008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:04.260 [2024-05-16 20:05:51.210020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.210070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:04.260 [2024-05-16 20:05:51.210082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.260 #55 NEW cov: 12086 ft: 15211 corp: 41/654b lim: 25 exec/s: 55 rss: 74Mb L: 21/25 MS: 1 CMP- DE: "\000\000\000\000\001\000\000\000"- 00:07:04.260 [2024-05-16 20:05:51.249898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.260 [2024-05-16 20:05:51.249921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.249989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:04.260 [2024-05-16 20:05:51.250003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.260 [2024-05-16 20:05:51.250057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:04.260 [2024-05-16 20:05:51.250070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.260 #56 NEW cov: 12086 ft: 15215 corp: 42/673b lim: 25 exec/s: 28 rss: 74Mb L: 19/25 MS: 1 InsertByte- 00:07:04.260 #56 DONE cov: 12086 ft: 15215 corp: 42/673b lim: 25 exec/s: 28 rss: 74Mb 00:07:04.260 ###### Recommended dictionary. ###### 00:07:04.260 "\027\001\000\000" # Uses: 1 00:07:04.260 "\200~pBm\245\006\000" # Uses: 1 00:07:04.260 "G\000\000\000\000\000\000\000" # Uses: 0 00:07:04.260 "\000\000\000\000\001\000\000\000" # Uses: 0 00:07:04.260 ###### End of recommended dictionary. 
###### 00:07:04.260 Done 56 runs in 2 second(s) 00:07:04.260 [2024-05-16 20:05:51.270320] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:04.260 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:04.520 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:04.520 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.520 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.520 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.520 20:05:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:04.520 [2024-05-16 20:05:51.440130] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:04.520 [2024-05-16 20:05:51.440208] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672907 ] 00:07:04.520 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.520 [2024-05-16 20:05:51.599371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.520 [2024-05-16 20:05:51.663630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.779 [2024-05-16 20:05:51.722170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.779 [2024-05-16 20:05:51.738136] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:04.779 [2024-05-16 20:05:51.738505] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:04.779 INFO: Running with entropic power schedule (0xFF, 100). 00:07:04.780 INFO: Seed: 2258101290 00:07:04.780 INFO: Loaded 1 modules (357283 inline 8-bit counters): 357283 [0x299c0cc, 0x29f346f), 00:07:04.780 INFO: Loaded 1 PC tables (357283 PCs): 357283 [0x29f3470,0x2f66ea0), 00:07:04.780 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:04.780 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.780 #2 INITED exec/s: 0 rss: 64Mb 00:07:04.780 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:04.780 This may also happen if the target rejected all inputs we tried so far 00:07:04.780 [2024-05-16 20:05:51.783063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.780 [2024-05-16 20:05:51.783093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.780 [2024-05-16 20:05:51.783140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.780 [2024-05-16 20:05:51.783156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.040 NEW_FUNC[1/687]: 0x4ae7c0 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:05.040 NEW_FUNC[2/687]: 0x4bf420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:05.040 #9 NEW cov: 11914 ft: 11914 corp: 2/43b lim: 100 exec/s: 0 rss: 71Mb L: 42/42 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:05.040 [2024-05-16 20:05:51.953462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.040 [2024-05-16 20:05:51.953497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.040 [2024-05-16 20:05:51.953543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.040 [2024-05-16 20:05:51.953558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.040 #10 NEW cov: 12044 ft: 12475 corp: 3/85b lim: 100 exec/s: 0 rss: 71Mb L: 42/42 MS: 1 ChangeBit- 00:07:05.040 [2024-05-16 20:05:52.033578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.040 [2024-05-16 20:05:52.033609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.040 [2024-05-16 20:05:52.033654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.040 [2024-05-16 20:05:52.033670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.040 #11 NEW cov: 12050 ft: 12727 corp: 4/127b lim: 100 exec/s: 0 rss: 71Mb L: 42/42 MS: 1 ShuffleBytes- 00:07:05.040 [2024-05-16 20:05:52.113782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.040 [2024-05-16 20:05:52.113809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.040 [2024-05-16 20:05:52.113840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.040 [2024-05-16 20:05:52.113855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.040 #17 NEW cov: 12135 ft: 12987 corp: 5/169b lim: 100 exec/s: 0 rss: 71Mb L: 42/42 MS: 1 ChangeByte- 00:07:05.300 [2024-05-16 20:05:52.194034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.194061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.300 [2024-05-16 20:05:52.194092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.194106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.300 #18 NEW cov: 12135 ft: 13109 corp: 6/212b lim: 100 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 InsertByte- 00:07:05.300 [2024-05-16 20:05:52.244109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.244136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.300 [2024-05-16 20:05:52.244182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.244197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.300 #19 NEW cov: 12135 ft: 13236 corp: 7/254b lim: 100 exec/s: 0 rss: 72Mb L: 42/43 MS: 1 ChangeBit- 00:07:05.300 [2024-05-16 20:05:52.294243] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.294270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.300 [2024-05-16 20:05:52.294314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.294329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.300 #20 NEW cov: 12135 ft: 13359 corp: 8/296b lim: 100 exec/s: 0 rss: 72Mb L: 42/43 MS: 1 CrossOver- 00:07:05.300 [2024-05-16 20:05:52.374473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.374499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.300 [2024-05-16 20:05:52.374547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:34339947158700032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.374562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.300 #21 NEW cov: 12135 ft: 13368 corp: 9/338b lim: 100 exec/s: 0 rss: 72Mb L: 42/43 MS: 1 CrossOver- 00:07:05.300 [2024-05-16 20:05:52.424567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.424593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.300 [2024-05-16 20:05:52.424638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:34339947158700032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.300 [2024-05-16 20:05:52.424653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.559 #22 NEW cov: 12135 ft: 13419 corp: 10/380b lim: 100 exec/s: 0 rss: 72Mb L: 42/43 MS: 1 ShuffleBytes- 00:07:05.559 [2024-05-16 20:05:52.504916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.559 [2024-05-16 20:05:52.504943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.560 [2024-05-16 20:05:52.504986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.560 [2024-05-16 20:05:52.505001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.560 [2024-05-16 20:05:52.505028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.560 [2024-05-16 20:05:52.505041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.560 [2024-05-16 
20:05:52.505067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8388608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.560 [2024-05-16 20:05:52.505080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.560 #23 NEW cov: 12135 ft: 13851 corp: 11/464b lim: 100 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 CrossOver- 00:07:05.560 [2024-05-16 20:05:52.585030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.560 [2024-05-16 20:05:52.585058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.560 [2024-05-16 20:05:52.585102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1056561954816 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.560 [2024-05-16 20:05:52.585117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.560 #24 NEW cov: 12135 ft: 13872 corp: 12/506b lim: 100 exec/s: 0 rss: 72Mb L: 42/84 MS: 1 ChangeBinInt- 00:07:05.560 [2024-05-16 20:05:52.635149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.560 [2024-05-16 20:05:52.635178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.560 [2024-05-16 20:05:52.635222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1056561954816 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.560 [2024-05-16 20:05:52.635238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.560 NEW_FUNC[1/1]: 0x1a6ef60 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:05.560 #25 NEW cov: 12152 ft: 13899 corp: 13/548b lim: 100 exec/s: 0 rss: 72Mb L: 42/84 MS: 1 CrossOver- 00:07:05.819 [2024-05-16 20:05:52.715392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.819 [2024-05-16 20:05:52.715422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.819 [2024-05-16 20:05:52.715474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:34339947158700032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.819 [2024-05-16 20:05:52.715490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.819 [2024-05-16 20:05:52.715517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.819 [2024-05-16 20:05:52.715531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.819 #26 NEW cov: 12152 ft: 14250 corp: 14/617b lim: 100 exec/s: 0 rss: 72Mb L: 69/84 MS: 1 InsertRepeatedBytes- 00:07:05.819 [2024-05-16 20:05:52.775480] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9223372038901596160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.819 [2024-05-16 20:05:52.775508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.819 #27 NEW cov: 12152 ft: 15055 corp: 15/644b lim: 100 exec/s: 27 rss: 72Mb L: 27/84 MS: 1 EraseBytes- 00:07:05.819 [2024-05-16 20:05:52.835716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.819 [2024-05-16 20:05:52.835744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.819 [2024-05-16 20:05:52.835788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:134140418588672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.819 [2024-05-16 20:05:52.835803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.819 [2024-05-16 20:05:52.835830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.819 [2024-05-16 20:05:52.835843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.819 #28 NEW cov: 12152 ft: 15106 corp: 16/714b lim: 100 exec/s: 28 rss: 72Mb L: 70/84 MS: 1 InsertByte- 00:07:05.819 [2024-05-16 20:05:52.915812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9223372038901596160 len:28 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.819 [2024-05-16 20:05:52.915841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.078 #29 NEW cov: 12152 ft: 15125 corp: 17/741b lim: 100 exec/s: 29 rss: 72Mb L: 27/84 MS: 1 ChangeBinInt- 00:07:06.079 [2024-05-16 20:05:52.996227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:52.996255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.079 [2024-05-16 20:05:52.996299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:134140418588672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:52.996314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.079 [2024-05-16 20:05:52.996344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:52.996358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.079 [2024-05-16 20:05:52.996383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:52.996397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.079 #30 NEW cov: 12152 ft: 15158 corp: 18/834b lim: 100 exec/s: 30 rss: 72Mb L: 93/93 MS: 1 CopyPart- 00:07:06.079 [2024-05-16 20:05:53.076379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:53.076408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.079 [2024-05-16 20:05:53.076452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:32768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:53.076474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.079 [2024-05-16 20:05:53.076503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:69524314952564736 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:53.076517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.079 #31 NEW cov: 12152 ft: 15164 corp: 19/894b lim: 100 exec/s: 31 rss: 72Mb L: 60/93 MS: 1 CrossOver- 00:07:06.079 [2024-05-16 20:05:53.156596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:53.156623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.079 [2024-05-16 20:05:53.156668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:134140418588672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:53.156683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.079 [2024-05-16 20:05:53.156711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:53.156724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.079 #32 NEW cov: 12152 ft: 15199 corp: 20/964b lim: 100 exec/s: 32 rss: 72Mb L: 70/93 MS: 1 CopyPart- 00:07:06.079 [2024-05-16 20:05:53.216702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:53.216728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.079 [2024-05-16 20:05:53.216773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:34339947158700032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.079 [2024-05-16 20:05:53.216788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.338 #33 NEW cov: 12152 ft: 15223 corp: 21/1006b lim: 100 exec/s: 33 rss: 72Mb L: 42/93 MS: 1 ShuffleBytes- 00:07:06.338 [2024-05-16 20:05:53.276803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:0 nsid:0 lba:9223372038901596160 len:28 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.338 [2024-05-16 20:05:53.276833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.338 #34 NEW cov: 12152 ft: 15292 corp: 22/1033b lim: 100 exec/s: 34 rss: 72Mb L: 27/93 MS: 1 CMP- DE: "\003\000\000\000"- 00:07:06.338 [2024-05-16 20:05:53.356975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9223372038901596160 len:28 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.338 [2024-05-16 20:05:53.357002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.338 #35 NEW cov: 12152 ft: 15316 corp: 23/1056b lim: 100 exec/s: 35 rss: 72Mb L: 23/93 MS: 1 EraseBytes- 00:07:06.338 [2024-05-16 20:05:53.437210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9223372038901596160 len:28 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.338 [2024-05-16 20:05:53.437237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.597 #36 NEW cov: 12152 ft: 15322 corp: 24/1079b lim: 100 exec/s: 36 rss: 73Mb L: 23/93 MS: 1 ChangeBit- 00:07:06.597 [2024-05-16 20:05:53.517549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.517576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.597 [2024-05-16 20:05:53.517620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:34339947158700032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.517634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.597 [2024-05-16 20:05:53.517662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.517675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.597 #37 NEW cov: 12152 ft: 15359 corp: 25/1148b lim: 100 exec/s: 37 rss: 73Mb L: 69/93 MS: 1 ShuffleBytes- 00:07:06.597 [2024-05-16 20:05:53.567615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.567644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.597 [2024-05-16 20:05:53.567691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.567707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.597 #38 NEW cov: 12152 ft: 15396 corp: 26/1190b lim: 100 exec/s: 38 rss: 73Mb L: 42/93 MS: 1 ChangeBinInt- 00:07:06.597 [2024-05-16 20:05:53.647769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.647796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.597 #39 NEW cov: 12159 ft: 15413 corp: 27/1215b lim: 100 exec/s: 39 rss: 73Mb L: 25/93 MS: 1 EraseBytes- 00:07:06.597 [2024-05-16 20:05:53.708093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.708120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.597 [2024-05-16 20:05:53.708165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.708179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.597 [2024-05-16 20:05:53.708210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.708223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.597 [2024-05-16 20:05:53.708249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.597 [2024-05-16 20:05:53.708262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.856 #40 NEW cov: 12159 ft: 15431 corp: 28/1300b lim: 100 exec/s: 40 rss: 73Mb L: 85/93 MS: 1 InsertRepeatedBytes- 00:07:06.856 [2024-05-16 20:05:53.768112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2046820352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.856 [2024-05-16 20:05:53.768139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.856 [2024-05-16 20:05:53.768183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.856 [2024-05-16 20:05:53.768198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.856 #41 NEW cov: 12159 ft: 15453 corp: 29/1342b lim: 100 exec/s: 20 rss: 73Mb L: 42/93 MS: 1 ChangeBinInt- 00:07:06.856 #41 DONE cov: 12159 ft: 15453 corp: 29/1342b lim: 100 exec/s: 20 rss: 73Mb 00:07:06.856 ###### Recommended dictionary. ###### 00:07:06.856 "\003\000\000\000" # Uses: 0 00:07:06.856 ###### End of recommended dictionary. 
###### 00:07:06.856 Done 41 runs in 2 second(s) 00:07:06.856 [2024-05-16 20:05:53.802592] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:06.856 20:05:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.856 20:05:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.856 20:05:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.856 20:05:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:06.856 00:07:06.856 real 1m3.653s 00:07:06.856 user 1m45.412s 00:07:06.856 sys 0m6.141s 00:07:06.856 20:05:53 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.856 20:05:53 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:06.856 ************************************ 00:07:06.856 END TEST nvmf_fuzz 00:07:06.856 ************************************ 00:07:06.856 20:05:53 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:06.856 20:05:53 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:06.856 20:05:53 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:06.856 20:05:53 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.856 20:05:53 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.856 20:05:53 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:06.856 ************************************ 00:07:06.856 START TEST vfio_fuzz 00:07:06.856 ************************************ 00:07:06.856 20:05:53 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:07.118 * Looking for test storage... 
00:07:07.118 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:07.118 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- 
common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:07.119 #define SPDK_CONFIG_H 00:07:07.119 #define SPDK_CONFIG_APPS 1 00:07:07.119 #define SPDK_CONFIG_ARCH native 00:07:07.119 #undef SPDK_CONFIG_ASAN 00:07:07.119 #undef SPDK_CONFIG_AVAHI 00:07:07.119 #undef SPDK_CONFIG_CET 00:07:07.119 #define SPDK_CONFIG_COVERAGE 1 00:07:07.119 #define SPDK_CONFIG_CROSS_PREFIX 00:07:07.119 #undef SPDK_CONFIG_CRYPTO 00:07:07.119 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:07.119 #undef SPDK_CONFIG_CUSTOMOCF 00:07:07.119 #undef SPDK_CONFIG_DAOS 00:07:07.119 #define SPDK_CONFIG_DAOS_DIR 00:07:07.119 #define SPDK_CONFIG_DEBUG 1 00:07:07.119 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:07.119 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:07.119 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:07.119 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:07.119 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:07.119 #undef SPDK_CONFIG_DPDK_UADK 00:07:07.119 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:07.119 #define SPDK_CONFIG_EXAMPLES 1 00:07:07.119 #undef SPDK_CONFIG_FC 00:07:07.119 #define SPDK_CONFIG_FC_PATH 00:07:07.119 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:07.119 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:07.119 #undef SPDK_CONFIG_FUSE 00:07:07.119 #define SPDK_CONFIG_FUZZER 1 00:07:07.119 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:07.119 #undef SPDK_CONFIG_GOLANG 00:07:07.119 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:07.119 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:07.119 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:07.119 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:07.119 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:07.119 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:07.119 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:07.119 #define SPDK_CONFIG_IDXD 1 00:07:07.119 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:07.119 #undef SPDK_CONFIG_IPSEC_MB 00:07:07.119 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:07.119 #define SPDK_CONFIG_ISAL 1 00:07:07.119 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:07.119 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:07.119 #define SPDK_CONFIG_LIBDIR 00:07:07.119 #undef SPDK_CONFIG_LTO 00:07:07.119 #define SPDK_CONFIG_MAX_LCORES 00:07:07.119 #define SPDK_CONFIG_NVME_CUSE 1 00:07:07.119 #undef SPDK_CONFIG_OCF 00:07:07.119 #define SPDK_CONFIG_OCF_PATH 00:07:07.119 #define SPDK_CONFIG_OPENSSL_PATH 00:07:07.119 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:07.119 #define SPDK_CONFIG_PGO_DIR 00:07:07.119 #undef SPDK_CONFIG_PGO_USE 00:07:07.119 #define SPDK_CONFIG_PREFIX /usr/local 00:07:07.119 #undef SPDK_CONFIG_RAID5F 00:07:07.119 #undef SPDK_CONFIG_RBD 00:07:07.119 #define SPDK_CONFIG_RDMA 1 
00:07:07.119 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:07.119 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:07.119 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:07.119 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:07.119 #undef SPDK_CONFIG_SHARED 00:07:07.119 #undef SPDK_CONFIG_SMA 00:07:07.119 #define SPDK_CONFIG_TESTS 1 00:07:07.119 #undef SPDK_CONFIG_TSAN 00:07:07.119 #define SPDK_CONFIG_UBLK 1 00:07:07.119 #define SPDK_CONFIG_UBSAN 1 00:07:07.119 #undef SPDK_CONFIG_UNIT_TESTS 00:07:07.119 #undef SPDK_CONFIG_URING 00:07:07.119 #define SPDK_CONFIG_URING_PATH 00:07:07.119 #undef SPDK_CONFIG_URING_ZNS 00:07:07.119 #undef SPDK_CONFIG_USDT 00:07:07.119 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:07.119 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:07.119 #define SPDK_CONFIG_VFIO_USER 1 00:07:07.119 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:07.119 #define SPDK_CONFIG_VHOST 1 00:07:07.119 #define SPDK_CONFIG_VIRTIO 1 00:07:07.119 #undef SPDK_CONFIG_VTUNE 00:07:07.119 #define SPDK_CONFIG_VTUNE_DIR 00:07:07.119 #define SPDK_CONFIG_WERROR 1 00:07:07.119 #define SPDK_CONFIG_WPDK_DIR 00:07:07.119 #undef SPDK_CONFIG_XNVME 00:07:07.119 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:07.119 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@57 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@61 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # : 1 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # : 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # : 0 00:07:07.120 
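[editor's note] The alternating ': <value>' / 'export <VAR>' pairs above are the xtrace signature of bash's assign-default idiom: ': "${VAR:=default}"' expands the parameter, assigning the default only if the variable was unset, and ':' discards the result, so only the already-expanded value appears in the trace. A minimal sketch of that pattern (the exact autotest_common.sh source is not visible in this log; flag names are taken from the trace):

    # assign-default-then-export, consistent with the trace above
    : "${SPDK_RUN_VALGRIND:=0}"   # ':' is a no-op; the expansion assigns 0 only if unset
    export SPDK_RUN_VALGRIND
    : "${SPDK_TEST_FUZZER:=0}"    # traced as ': 1' here because this run pre-set it to 1
    export SPDK_TEST_FUZZER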
20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # : 1 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # : 1 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # : 1 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # : 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@133 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # : 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # : true 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # : 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@166 -- # : 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # : 0 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 
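[editor's note] After this block every SPDK_TEST_*/SPDK_RUN_* knob has been exported with an explicit value; in this run only SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_FUZZER, SPDK_TEST_FUZZER_SHORT and SPDK_RUN_UBSAN trace as 1, alongside the string-valued SPDK_TEST_NVMF_TRANSPORT=rdma and SPDK_AUTOTEST_X=true. An illustrative one-liner (not part of the scripts shown) to dump the effective matrix from inside such a run:

    # list the exported test knobs for this run (illustrative)
    env | grep -E '^SPDK_(TEST|RUN)_' | sort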
00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:07.120 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # cat 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j88 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # [[ -z 1673353 ]] 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # kill -0 1673353 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:07.121 
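[editor's note] The 'kill -0 1673353' above (guarded by the '[[ -z ... ]]' test) is a liveness probe, not a kill: signal 0 delivers nothing and only asks the kernel whether the PID exists and may be signalled, so set_test_storage runs only while the driving process is still alive. A minimal sketch of the same probe (the PID below is illustrative):

    # 'kill -0' sends no signal; it only tests existence/permission
    pid=1673353
    if kill -0 "$pid" 2>/dev/null; then
        echo "driver $pid still running; provision test storage"
    fi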
20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.z5MhFS 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.z5MhFS/tests/vfio /tmp/spdk.z5MhFS 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # df -T 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=91036938240 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=99792764928 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8755826688 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=49891672064 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=49896382464 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- 
# mounts["$mount"]=tmpfs 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=19952656384 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=19958554624 00:07:07.121 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5898240 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=49895632896 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=49896382464 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=749568 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=9979269120 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=9979273216 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:07.122 * Looking for test storage... 
00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@373 -- # target_space=91036938240 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # new_size=10970419200 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:07.122 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # return 0 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # true 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:07.122 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:07:07.122 20:05:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:07.122 [2024-05-16 20:05:54.207356] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:07.122 [2024-05-16 20:05:54.207443] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673440 ] 00:07:07.122 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.381 [2024-05-16 20:05:54.266284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.381 [2024-05-16 20:05:54.342651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.639 [2024-05-16 20:05:54.530887] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:07.639 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.639 INFO: Seed: 755158845 00:07:07.639 INFO: Loaded 1 modules (354519 inline 8-bit counters): 354519 [0x295d8cc, 0x29b41a3), 00:07:07.639 INFO: Loaded 1 PC tables (354519 PCs): 354519 [0x29b41a8,0x2f1cf18), 00:07:07.639 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:07.639 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.639 #2 INITED exec/s: 0 rss: 66Mb 00:07:07.639 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
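[editor's note] The long llvm_vfio_fuzz invocation above is assembled from the run.sh locals traced earlier; mapping each flag to its source variable is an inference from the traced values, not from authoritative flag documentation:

    # value sources as traced in vfio/run.sh (mapping inferred):
    #   -m 0x1                                  core=0x1 (core mask)
    #   -s 0                                    mem_size=0
    #   -t 1                                    timen=1 (seconds per fuzzer)
    #   -Z 0                                    fuzzer_type=0
    #   -D .../corpus/llvm_vfio_0               corpus_dir
    #   -F /tmp/vfio-user-0/domain/1            vfiouser_dir
    #   -Y /tmp/vfio-user-0/domain/2            vfiouser_io_dir
    #   -c /tmp/vfio-user-0/fuzz_vfio_json.conf vfiouser_cfg
    #   -r /tmp/vfio-user-0/spdk0.sock          per-instance RPC socket

Each fuzzer type gets its own -Z index and its own /tmp/vfio-user-N tree, so concurrent and successive runs never share vfio-user sockets or domains.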
00:07:07.639 This may also happen if the target rejected all inputs we tried so far 00:07:07.639 [2024-05-16 20:05:54.600346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:07.898 NEW_FUNC[1/646]: 0x482740 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:07.898 NEW_FUNC[2/646]: 0x488250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:07.898 #7 NEW cov: 10920 ft: 10719 corp: 2/7b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 5 CrossOver-InsertRepeatedBytes-ChangeBit-ChangeBit-InsertByte- 00:07:07.898 #8 NEW cov: 10935 ft: 13742 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:08.157 #14 NEW cov: 10935 ft: 15546 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ChangeByte- 00:07:08.415 NEW_FUNC[1/1]: 0x1a3b490 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:08.415 #15 NEW cov: 10952 ft: 16542 corp: 5/25b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:08.673 #16 NEW cov: 10952 ft: 17172 corp: 6/31b lim: 6 exec/s: 16 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:08.673 #17 NEW cov: 10952 ft: 17460 corp: 7/37b lim: 6 exec/s: 17 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:07:08.932 #18 NEW cov: 10952 ft: 17587 corp: 8/43b lim: 6 exec/s: 18 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:09.190 #19 NEW cov: 10952 ft: 17895 corp: 9/49b lim: 6 exec/s: 19 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:07:09.449 #23 NEW cov: 10952 ft: 17987 corp: 10/55b lim: 6 exec/s: 23 rss: 74Mb L: 6/6 MS: 4 EraseBytes-CrossOver-CopyPart-InsertRepeatedBytes- 00:07:09.449 #24 NEW cov: 10959 ft: 18201 corp: 11/61b lim: 6 exec/s: 24 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:09.708 #25 NEW cov: 10959 ft: 18606 corp: 12/67b lim: 6 exec/s: 12 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:07:09.708 #25 DONE cov: 10959 ft: 18606 corp: 12/67b lim: 6 exec/s: 12 rss: 74Mb 00:07:09.708 Done 25 runs in 2 second(s) 00:07:09.708 [2024-05-16 20:05:56.724630] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:09.708 [2024-05-16 20:05:56.774504] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:09.967 20:05:56 
llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:09.967 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:09.967 20:05:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:09.967 [2024-05-16 20:05:57.012842] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:09.967 [2024-05-16 20:05:57.012902] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673916 ] 00:07:09.967 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.967 [2024-05-16 20:05:57.068683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.227 [2024-05-16 20:05:57.146548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.227 [2024-05-16 20:05:57.330857] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:10.227 INFO: Running with entropic power schedule (0xFF, 100). 00:07:10.227 INFO: Seed: 3555162188 00:07:10.227 INFO: Loaded 1 modules (354519 inline 8-bit counters): 354519 [0x295d8cc, 0x29b41a3), 00:07:10.227 INFO: Loaded 1 PC tables (354519 PCs): 354519 [0x29b41a8,0x2f1cf18), 00:07:10.227 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:10.227 INFO: A corpus is not provided, starting from an empty corpus 00:07:10.227 #2 INITED exec/s: 0 rss: 66Mb 00:07:10.227 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
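[editor's note] The '#N NEW' lines emitted by each run are standard libFuzzer status output. Reading one of the fuzzer-0 lines above field by field, per libFuzzer's documented format (the L: reading is the common interpretation, not spelled out in this log):

    #8 NEW cov: 10935 ft: 13742 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeBinInt-
    # #8       eighth input executed so far
    # cov:     code-coverage points observed across all runs
    # ft:      distinct features seen (finer-grained than cov)
    # corp:    3 corpus entries totalling 13 bytes
    # lim:     current cap on new input length (6 bytes here)
    # exec/s:  executions per second (0 means under one second elapsed)
    # rss:     resident memory
    # L: 6/6   this input's length, alongside the current maximum
    # MS:      the mutation sequence that produced it (1 mutation: ChangeBinInt)

The closing 'DONE' line repeats the final counters, and 'Done N runs in 2 second(s)' is the SPDK wrapper's own summary for the -t 1 budget plus teardown.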
00:07:10.227 This may also happen if the target rejected all inputs we tried so far 00:07:10.485 [2024-05-16 20:05:57.400525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:10.485 [2024-05-16 20:05:57.461488] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:10.485 [2024-05-16 20:05:57.461508] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:10.485 [2024-05-16 20:05:57.461523] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:10.744 NEW_FUNC[1/648]: 0x482ce0 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:10.744 NEW_FUNC[2/648]: 0x488250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:10.744 #62 NEW cov: 10917 ft: 10877 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 5 InsertByte-ChangeBit-CopyPart-InsertByte-InsertByte- 00:07:10.744 [2024-05-16 20:05:57.738222] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:10.744 [2024-05-16 20:05:57.738258] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:10.744 [2024-05-16 20:05:57.738273] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:10.744 #68 NEW cov: 10931 ft: 13978 corp: 3/9b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:11.003 [2024-05-16 20:05:57.910224] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:11.003 [2024-05-16 20:05:57.910245] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:11.003 [2024-05-16 20:05:57.910259] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:11.003 #78 NEW cov: 10931 ft: 14752 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 ShuffleBytes-InsertByte-CopyPart-CrossOver-InsertByte- 00:07:11.003 [2024-05-16 20:05:58.089770] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:11.003 [2024-05-16 20:05:58.089791] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:11.003 [2024-05-16 20:05:58.089804] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:11.262 NEW_FUNC[1/1]: 0x1a3b490 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:11.262 #82 NEW cov: 10948 ft: 15054 corp: 5/17b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 CMP-EraseBytes-ShuffleBytes-CopyPart- DE: "\001\000"- 00:07:11.262 [2024-05-16 20:05:58.259206] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:11.262 [2024-05-16 20:05:58.259228] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:11.262 [2024-05-16 20:05:58.259242] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:11.262 #83 NEW cov: 10948 ft: 16025 corp: 6/21b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:11.521 [2024-05-16 20:05:58.429451] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:11.521 [2024-05-16 20:05:58.429479] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:11.521 [2024-05-16 20:05:58.429494] vfio_user.c: 
144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:11.521 #89 NEW cov: 10948 ft: 16373 corp: 7/25b lim: 4 exec/s: 89 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:11.521 [2024-05-16 20:05:58.604858] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:11.521 [2024-05-16 20:05:58.604879] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:11.521 [2024-05-16 20:05:58.604893] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:11.781 #93 NEW cov: 10948 ft: 16472 corp: 8/29b lim: 4 exec/s: 93 rss: 74Mb L: 4/4 MS: 4 ChangeByte-CrossOver-CopyPart-InsertByte- 00:07:11.781 [2024-05-16 20:05:58.777233] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:11.781 [2024-05-16 20:05:58.777254] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:11.781 [2024-05-16 20:05:58.777268] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:11.781 #99 NEW cov: 10948 ft: 17329 corp: 9/33b lim: 4 exec/s: 99 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:12.040 [2024-05-16 20:05:58.952913] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:12.040 [2024-05-16 20:05:58.952933] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:12.040 [2024-05-16 20:05:58.952947] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:12.040 #100 NEW cov: 10948 ft: 17385 corp: 10/37b lim: 4 exec/s: 100 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:12.040 [2024-05-16 20:05:59.125291] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:12.040 [2024-05-16 20:05:59.125311] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:12.040 [2024-05-16 20:05:59.125324] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:12.298 #101 NEW cov: 10955 ft: 17696 corp: 11/41b lim: 4 exec/s: 101 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:12.298 [2024-05-16 20:05:59.294568] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:12.298 [2024-05-16 20:05:59.294588] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:12.298 [2024-05-16 20:05:59.294601] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:12.298 #102 NEW cov: 10955 ft: 17860 corp: 12/45b lim: 4 exec/s: 51 rss: 74Mb L: 4/4 MS: 1 CMP- DE: "\001\000\000t"- 00:07:12.298 #102 DONE cov: 10955 ft: 17860 corp: 12/45b lim: 4 exec/s: 51 rss: 74Mb 00:07:12.298 ###### Recommended dictionary. ###### 00:07:12.298 "\001\000" # Uses: 0 00:07:12.298 "\001\000\000t" # Uses: 0 00:07:12.298 ###### End of recommended dictionary. 
###### 00:07:12.298 Done 102 runs in 2 second(s) 00:07:12.298 [2024-05-16 20:05:59.415658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:12.558 [2024-05-16 20:05:59.465998] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:12.558 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:12.558 20:05:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:12.817 [2024-05-16 20:05:59.708372] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
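[editor's note] Between runs, common.sh simply advances i and re-enters start_llvm_fuzz, and run.sh rebuilds a fresh /tmp/vfio-user-N tree; the sed call retargets the shared fuzz_vfio_json.conf template at that instance's vfio-user domains. A sketch of the loop body pieced together from the traces; the redirect of sed's output into the per-instance config is not visible in the xtrace and is assumed, and $corpus_base/$template_cfg stand in for the long workspace paths:

    for (( i = 0; i < fuzz_num; i++ )); do   # fuzz_num=7, one per '.fn =' in llvm_vfio_fuzz.c
        mkdir -p "/tmp/vfio-user-$i/domain/1" "/tmp/vfio-user-$i/domain/2" "$corpus_base/llvm_vfio_$i"
        sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$i/domain/1%; s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$i/domain/2%" \
            "$template_cfg" > "/tmp/vfio-user-$i/fuzz_vfio_json.conf"   # redirect assumed
        # ... launch llvm_vfio_fuzz with -Z "$i", then rm -rf "/tmp/vfio-user-$i"
    done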
00:07:12.817 [2024-05-16 20:05:59.708429] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1674310 ] 00:07:12.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.817 [2024-05-16 20:05:59.764591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.817 [2024-05-16 20:05:59.842283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.076 [2024-05-16 20:06:00.031066] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:13.076 INFO: Running with entropic power schedule (0xFF, 100). 00:07:13.076 INFO: Seed: 1960185558 00:07:13.076 INFO: Loaded 1 modules (354519 inline 8-bit counters): 354519 [0x295d8cc, 0x29b41a3), 00:07:13.076 INFO: Loaded 1 PC tables (354519 PCs): 354519 [0x29b41a8,0x2f1cf18), 00:07:13.076 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:13.076 INFO: A corpus is not provided, starting from an empty corpus 00:07:13.076 #2 INITED exec/s: 0 rss: 66Mb 00:07:13.076 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:13.076 This may also happen if the target rejected all inputs we tried so far 00:07:13.076 [2024-05-16 20:06:00.105314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:13.076 [2024-05-16 20:06:00.180597] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:13.335 NEW_FUNC[1/646]: 0x4836c0 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:13.335 NEW_FUNC[2/646]: 0x488250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:13.335 #7 NEW cov: 10892 ft: 10782 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 5 ShuffleBytes-ChangeByte-CrossOver-InsertRepeatedBytes-CopyPart- 00:07:13.620 [2024-05-16 20:06:00.499929] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:13.620 NEW_FUNC[1/1]: 0x142ede0 in sq_dbl_tailp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:572 00:07:13.620 #8 NEW cov: 10914 ft: 13828 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:13.620 [2024-05-16 20:06:00.700583] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:13.880 #9 NEW cov: 10914 ft: 15188 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:13.880 [2024-05-16 20:06:00.890447] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:13.880 NEW_FUNC[1/1]: 0x1a3b490 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:13.880 #14 NEW cov: 10931 ft: 15564 corp: 5/33b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 5 CrossOver-ChangeBinInt-ChangeBit-ShuffleBytes-CopyPart- 00:07:14.139 [2024-05-16 20:06:01.099449] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:14.139 #15 NEW cov: 10931 ft: 16846 corp: 6/41b lim: 8 exec/s: 15 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:14.397 [2024-05-16 20:06:01.311086] vfio_user.c: 
170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:14.397 #16 NEW cov: 10931 ft: 17565 corp: 7/49b lim: 8 exec/s: 16 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:14.397 [2024-05-16 20:06:01.521542] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:14.656 #17 NEW cov: 10931 ft: 18055 corp: 8/57b lim: 8 exec/s: 17 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:14.656 [2024-05-16 20:06:01.709207] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:14.915 #18 NEW cov: 10931 ft: 18197 corp: 9/65b lim: 8 exec/s: 18 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:14.915 [2024-05-16 20:06:01.918197] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:14.915 #19 NEW cov: 10938 ft: 18432 corp: 10/73b lim: 8 exec/s: 19 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:07:15.174 [2024-05-16 20:06:02.113865] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:15.174 #20 NEW cov: 10938 ft: 18596 corp: 11/81b lim: 8 exec/s: 10 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:15.174 #20 DONE cov: 10938 ft: 18596 corp: 11/81b lim: 8 exec/s: 10 rss: 74Mb 00:07:15.174 Done 20 runs in 2 second(s) 00:07:15.174 [2024-05-16 20:06:02.246652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:15.174 [2024-05-16 20:06:02.296527] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:15.434 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' 
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%;
00:07:15.434 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:15.434 20:06:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3
00:07:15.434 [2024-05-16 20:06:02.540564] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:07:15.434 [2024-05-16 20:06:02.540627] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1674912 ]
00:07:15.434 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.693 [2024-05-16 20:06:02.595789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.693 [2024-05-16 20:06:02.673096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.952 [2024-05-16 20:06:02.863065] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:07:15.952 INFO: Running with entropic power schedule (0xFF, 100).
00:07:15.952 INFO: Seed: 497216547
00:07:15.952 INFO: Loaded 1 modules (354519 inline 8-bit counters): 354519 [0x295d8cc, 0x29b41a3),
00:07:15.952 INFO: Loaded 1 PC tables (354519 PCs): 354519 [0x29b41a8,0x2f1cf18),
00:07:15.952 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:15.952 INFO: A corpus is not provided, starting from an empty corpus
00:07:15.952 #2 INITED exec/s: 0 rss: 66Mb
00:07:15.952 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:15.952 This may also happen if the target rejected all inputs we tried so far
00:07:15.952 [2024-05-16 20:06:02.932398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller
00:07:16.211 NEW_FUNC[1/647]: 0x483da0 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124
00:07:16.211 NEW_FUNC[2/647]: 0x488250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:16.211 #70 NEW cov: 10908 ft: 10880 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 3 CrossOver-InsertByte-InsertRepeatedBytes-
00:07:16.471 #90 NEW cov: 10922 ft: 13684 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 5 InsertRepeatedBytes-CopyPart-ChangeBit-CopyPart-InsertByte-
00:07:16.471 #91 NEW cov: 10922 ft: 15527 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit-
00:07:16.730 NEW_FUNC[1/1]: 0x1a3b490 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:07:16.730 #92 NEW cov: 10939 ft: 16480 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeByte-
00:07:16.989 #103 NEW cov: 10939 ft: 17172 corp: 6/161b lim: 32 exec/s: 103 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:16.989 #104 NEW cov: 10939 ft: 17361 corp: 7/193b lim: 32 exec/s: 104 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:17.248 #105 NEW cov: 10939 ft: 17443 corp: 8/225b lim: 32 exec/s: 105 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:17.508 #106 NEW cov: 10939 ft: 17507 corp: 9/257b lim: 32 exec/s: 106 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:17.767 #107 NEW cov: 10939 ft: 17868 corp: 10/289b lim: 32 exec/s: 107 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:17.767 #108 NEW cov: 10946 ft: 17937 corp: 11/321b lim: 32 exec/s: 108 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:18.026 #112 NEW cov: 10946 ft: 17962 corp: 12/353b lim: 32 exec/s: 56 rss: 77Mb L: 32/32 MS: 4 CrossOver-ShuffleBytes-ChangeByte-CrossOver-
00:07:18.026 #112 DONE cov: 10946 ft: 17962 corp: 12/353b lim: 32 exec/s: 56 rss: 77Mb
00:07:18.026 Done 112 runs in 2 second(s)
00:07:18.027 [2024-05-16 20:06:05.016656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller
00:07:18.027 [2024-05-16 20:06:05.066686] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
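Every target so far has logged '0 files found' for its corpus directory, so libFuzzer starts each run from an empty corpus. If saved inputs existed, dropping them into the matching spdk/../corpus/llvm_vfio_N directory before the run would seed it. A hypothetical example for the upcoming target 4 (the seed-file location is invented; this job ships no seeds):

    # Hypothetical corpus seeding; nothing in this pipeline provides seed inputs.
    corpus=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
    mkdir -p "$corpus"
    cp /path/to/saved-vfio-inputs/* "$corpus"/    # assumed stash of earlier finds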
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%;
00:07:18.286 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:18.286 20:06:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4
00:07:18.286 [2024-05-16 20:06:05.310862] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:07:18.286 [2024-05-16 20:06:05.310924] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1675355 ]
00:07:18.286 EAL: No free 2048 kB hugepages reported on node 1
00:07:18.286 [2024-05-16 20:06:05.366410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.546 [2024-05-16 20:06:05.445933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.546 [2024-05-16 20:06:05.618924] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:07:18.546 INFO: Running with entropic power schedule (0xFF, 100).
00:07:18.546 INFO: Seed: 3253209365
00:07:18.546 INFO: Loaded 1 modules (354519 inline 8-bit counters): 354519 [0x295d8cc, 0x29b41a3),
00:07:18.546 INFO: Loaded 1 PC tables (354519 PCs): 354519 [0x29b41a8,0x2f1cf18),
00:07:18.546 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:18.546 INFO: A corpus is not provided, starting from an empty corpus
00:07:18.546 #2 INITED exec/s: 0 rss: 66Mb
00:07:18.546 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:18.546 This may also happen if the target rejected all inputs we tried so far
00:07:18.546 [2024-05-16 20:06:05.688258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller
00:07:18.805 NEW_FUNC[1/647]: 0x484620 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144
00:07:18.805 NEW_FUNC[2/647]: 0x488250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:18.805 #19 NEW cov: 10910 ft: 10875 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 2 InsertRepeatedBytes-CopyPart-
00:07:19.064 #20 NEW cov: 10924 ft: 14550 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBit-
00:07:19.323 #21 NEW cov: 10924 ft: 15235 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CopyPart-
00:07:19.582 NEW_FUNC[1/1]: 0x1a3b490 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:07:19.582 #32 NEW cov: 10941 ft: 15921 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit-
00:07:19.582 #34 NEW cov: 10941 ft: 16298 corp: 6/161b lim: 32 exec/s: 34 rss: 74Mb L: 32/32 MS: 2 EraseBytes-InsertRepeatedBytes-
00:07:19.840 #40 NEW cov: 10941 ft: 16402 corp: 7/193b lim: 32 exec/s: 40 rss: 74Mb L: 32/32 MS: 1 ChangeBit-
00:07:20.346 #46 NEW cov: 10941 ft: 16436 corp: 8/225b lim: 32 exec/s: 46 rss: 74Mb L: 32/32 MS: 1 ChangeBit-
00:07:20.346 #47 NEW cov: 10941 ft: 17262 corp: 9/257b lim: 32 exec/s: 47 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes-
00:07:20.346 #48 NEW cov: 10941 ft: 17287 corp: 10/289b lim: 32 exec/s: 48 rss: 74Mb L: 32/32 MS: 1 CopyPart-
00:07:20.604 #49 NEW cov: 10948 ft: 17461 corp: 11/321b lim: 32 exec/s: 49 rss: 74Mb L: 32/32 MS: 1 ChangeBit-
00:07:20.604 #50 NEW cov: 10948 ft: 17624 corp: 12/353b lim: 32 exec/s: 25 rss: 74Mb L: 32/32 MS: 1 ChangeBit-
00:07:20.604 #50 DONE cov: 10948 ft: 17624 corp: 12/353b lim: 32 exec/s: 25 rss: 74Mb
00:07:20.604 Done 50 runs in 2 second(s)
00:07:20.863 [2024-05-16 20:06:07.768642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller
00:07:20.863 [2024-05-16 20:06:07.818520] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
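Three targets in, every run has ended with the same summary shape, '#N DONE ... Done N runs in M second(s)', which makes the totals easy to pull out of a saved copy of this console stream. A reading convenience only, not part of the test scripts ('console.log' stands in for wherever this output was captured):

    # List each fuzzer's closing summary from a saved console log (hypothetical helper).
    grep -Eo 'Done [0-9]+ runs in [0-9]+ second\(s\)' console.log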
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%;
00:07:21.123 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:21.123 20:06:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5
00:07:21.123 [2024-05-16 20:06:08.038512] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:07:21.123 [2024-05-16 20:06:08.038564] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1676181 ]
00:07:21.123 EAL: No free 2048 kB hugepages reported on node 1
00:07:21.123 [2024-05-16 20:06:08.092839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.123 [2024-05-16 20:06:08.167303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.382 [2024-05-16 20:06:08.331932] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:07:21.382 INFO: Running with entropic power schedule (0xFF, 100).
00:07:21.382 INFO: Seed: 1670239534
00:07:21.382 INFO: Loaded 1 modules (354519 inline 8-bit counters): 354519 [0x295d8cc, 0x29b41a3),
00:07:21.382 INFO: Loaded 1 PC tables (354519 PCs): 354519 [0x29b41a8,0x2f1cf18),
00:07:21.382 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:21.382 INFO: A corpus is not provided, starting from an empty corpus
00:07:21.382 #2 INITED exec/s: 0 rss: 65Mb
00:07:21.382 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:21.382 This may also happen if the target rejected all inputs we tried so far
00:07:21.382 [2024-05-16 20:06:08.401877] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:07:21.382 [2024-05-16 20:06:08.452488] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:21.382 [2024-05-16 20:06:08.452516] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:21.642 NEW_FUNC[1/647]: 0x485020 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:07:21.642 NEW_FUNC[2/647]: 0x488250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:21.642 #9 NEW cov: 10913 ft: 10685 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 2 CMP-InsertRepeatedBytes- DE: "\212\201\352\010v\245\006\000"-
00:07:21.642 [2024-05-16 20:06:08.744495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:21.642 [2024-05-16 20:06:08.744536] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:21.901 NEW_FUNC[1/1]: 0x13f7d20 in cq_tailp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:586
00:07:21.901 #10 NEW cov: 10929 ft: 13469 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:21.901 [2024-05-16 20:06:08.939204] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:21.901 [2024-05-16 20:06:08.939232] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:21.901 #11 NEW cov: 10929 ft: 14010 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes-
00:07:22.160 [2024-05-16 20:06:09.123232] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:22.160 [2024-05-16 20:06:09.123259] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:22.160 NEW_FUNC[1/1]: 0x1a3b490 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:07:22.160 #17 NEW cov: 10946 ft: 15138 corp: 5/53b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:22.426 [2024-05-16 20:06:09.312021] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:22.426 [2024-05-16 20:06:09.312049] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:22.426 #18 NEW cov: 10946 ft: 15945 corp: 6/66b lim: 13 exec/s: 18 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:22.426 [2024-05-16 20:06:09.513267] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:22.426 [2024-05-16 20:06:09.513296] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:22.688 #19 NEW cov: 10946 ft: 16197 corp: 7/79b lim: 13 exec/s: 19 rss: 74Mb L: 13/13 MS: 1 CopyPart-
00:07:22.688 [2024-05-16 20:06:09.697363] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:22.688 [2024-05-16 20:06:09.697390] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:22.688 #20 NEW cov: 10946 ft: 16819 corp: 8/92b lim: 13 exec/s: 20 rss: 74Mb L: 13/13 MS: 1 CrossOver-
00:07:22.947 [2024-05-16 20:06:09.884610] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:22.947 [2024-05-16 20:06:09.884638] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:22.947 #26 NEW cov: 10946 ft: 17276 corp: 9/105b lim: 13 exec/s: 26 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:22.947 [2024-05-16 20:06:10.071757] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:22.947 [2024-05-16 20:06:10.071789] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:23.205 #27 NEW cov: 10953 ft: 17319 corp: 10/118b lim: 13 exec/s: 27 rss: 74Mb L: 13/13 MS: 1 CrossOver-
00:07:23.205 [2024-05-16 20:06:10.269528] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:23.205 [2024-05-16 20:06:10.269556] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:23.464 #28 NEW cov: 10953 ft: 17375 corp: 11/131b lim: 13 exec/s: 14 rss: 74Mb L: 13/13 MS: 1 CrossOver-
00:07:23.464 #28 DONE cov: 10953 ft: 17375 corp: 11/131b lim: 13 exec/s: 14 rss: 74Mb
00:07:23.464 ###### Recommended dictionary. ######
00:07:23.464 "\212\201\352\010v\245\006\000" # Uses: 2
00:07:23.464 ###### End of recommended dictionary. ######
00:07:23.464 Done 28 runs in 2 second(s)
00:07:23.464 [2024-05-16 20:06:10.403658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:07:23.464 [2024-05-16 20:06:10.454146] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
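The run.sh@43-44 echoes in the setup trace below append two leak suppressions to /var/tmp/suppress_vfio_fuzz, and run.sh@34 points LSAN_OPTIONS at that file, so LeakSanitizer ignores those two known NVMe-oF teardown allocations instead of failing the run. The effective file and environment implied by the trace (the redirection onto the suppression file is inferred; xtrace does not display it):

    # Effective LSAN setup implied by run.sh@34/43/44; redirection assumed.
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' \
        > /var/tmp/suppress_vfio_fuzz
    export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0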
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:07:23.723 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:23.723 20:06:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:07:23.723 [2024-05-16 20:06:10.708816] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:07:23.723 [2024-05-16 20:06:10.708900] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1676602 ]
00:07:23.723 EAL: No free 2048 kB hugepages reported on node 1
00:07:23.723 [2024-05-16 20:06:10.769877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.723 [2024-05-16 20:06:10.848358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.982 [2024-05-16 20:06:11.028934] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:07:23.982 INFO: Running with entropic power schedule (0xFF, 100).
00:07:23.982 INFO: Seed: 74277028
00:07:23.982 INFO: Loaded 1 modules (354519 inline 8-bit counters): 354519 [0x295d8cc, 0x29b41a3),
00:07:23.982 INFO: Loaded 1 PC tables (354519 PCs): 354519 [0x29b41a8,0x2f1cf18),
00:07:23.982 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:23.982 INFO: A corpus is not provided, starting from an empty corpus
00:07:23.982 #2 INITED exec/s: 0 rss: 66Mb
00:07:23.982 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:23.982 This may also happen if the target rejected all inputs we tried so far
00:07:23.982 [2024-05-16 20:06:11.097420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:07:24.240 [2024-05-16 20:06:11.156490] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:24.240 [2024-05-16 20:06:11.156518] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:24.240 NEW_FUNC[1/644]: 0x485d10 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:07:24.240 NEW_FUNC[2/644]: 0x488250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:24.240 #27 NEW cov: 10781 ft: 10879 corp: 2/10b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 5 CrossOver-CopyPart-ChangeBinInt-EraseBytes-InsertRepeatedBytes-
00:07:24.498 [2024-05-16 20:06:11.432881] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:24.498 [2024-05-16 20:06:11.432922] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:24.498 NEW_FUNC[1/4]: 0x48aaa0 in write_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:353
00:07:24.498 NEW_FUNC[2/4]: 0x48b9e0 in read_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:324
00:07:24.498 #28 NEW cov: 10925 ft: 13354 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeByte-
00:07:24.498 [2024-05-16 20:06:11.605271] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:24.498 [2024-05-16 20:06:11.605298] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:24.757 #34 NEW cov: 10925 ft: 13862 corp: 4/28b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CopyPart-
00:07:24.757 [2024-05-16 20:06:11.775671] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:24.757 [2024-05-16 20:06:11.775698] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:24.757 NEW_FUNC[1/1]: 0x1a3b490 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:07:24.757 #40 NEW cov: 10942 ft: 15085 corp: 5/37b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:25.016 [2024-05-16 20:06:11.950019] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:25.016 [2024-05-16 20:06:11.950046] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:25.016 #46 NEW cov: 10942 ft: 15177 corp: 6/46b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:25.016 [2024-05-16 20:06:12.126484] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:25.016 [2024-05-16 20:06:12.126511] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:25.274 #47 NEW cov: 10942 ft: 15780 corp: 7/55b lim: 9 exec/s: 47 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt-
00:07:25.274 [2024-05-16 20:06:12.301506] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:25.274 [2024-05-16 20:06:12.301533] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:25.274 #48 NEW cov: 10942 ft: 15928 corp: 8/64b lim: 9 exec/s: 48 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:07:25.533 [2024-05-16 20:06:12.472824] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:25.533 [2024-05-16 20:06:12.472850] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:25.533 #49 NEW cov: 10942 ft: 15956 corp: 9/73b lim: 9 exec/s: 49 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:07:25.533 [2024-05-16 20:06:12.647309] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:25.533 [2024-05-16 20:06:12.647335] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:25.792 #50 NEW cov: 10942 ft: 16350 corp: 10/82b lim: 9 exec/s: 50 rss: 74Mb L: 9/9 MS: 1 CopyPart-
00:07:25.792 [2024-05-16 20:06:12.819875] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:25.792 [2024-05-16 20:06:12.819900] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:25.792 #51 NEW cov: 10949 ft: 16406 corp: 11/91b lim: 9 exec/s: 51 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:07:26.051 [2024-05-16 20:06:12.992439] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:26.051 [2024-05-16 20:06:12.992471] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:26.051 #52 NEW cov: 10949 ft: 16453 corp: 12/100b lim: 9 exec/s: 26 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt-
00:07:26.051 #52 DONE cov: 10949 ft: 16453 corp: 12/100b lim: 9 exec/s: 26 rss: 74Mb
00:07:26.051 Done 52 runs in 2 second(s)
00:07:26.051 [2024-05-16 20:06:13.111650] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:07:26.051 [2024-05-16 20:06:13.161819] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:07:26.310 20:06:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:07:26.310 20:06:13 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:26.310 20:06:13 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:26.310 20:06:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:07:26.310
00:07:26.310 real 0m19.377s
00:07:26.310 user 0m28.453s
00:07:26.310 sys 0m1.587s
00:07:26.310 20:06:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:26.310 20:06:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:26.310 ************************************
00:07:26.310 END TEST vfio_fuzz
00:07:26.310 ************************************
00:07:26.310 20:06:13 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]]
00:07:26.310
00:07:26.310 real 1m23.244s
00:07:26.310 user 2m13.970s
00:07:26.310 sys 0m7.851s
00:07:26.310 20:06:13 llvm_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:26.310 20:06:13 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:26.310 ************************************
00:07:26.310 END TEST llvm_fuzz
00:07:26.310 ************************************
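With the suite complete (0m19.377s wall for vfio_fuzz, 1m23.244s for llvm_fuzz overall), any single target can be rerun by hand using the invocation traced at run.sh@47. For example, fuzzer type 6; note that run.sh@58 removed /tmp/vfio-user-6 and its rewritten config after the run, so that tree must be recreated first:

    # Manual rerun of fuzzer type 6 with the exact flags from the run.sh@47 trace;
    # recreate /tmp/vfio-user-6 and its fuzz_vfio_json.conf before running.
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$spdk"/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 \
        -P "$spdk"/../output/llvm/ -F /tmp/vfio-user-6/domain/1 \
        -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 \
        -D "$spdk"/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 \
        -r /tmp/vfio-user-6/spdk6.sock -Z 6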
00:07:26.310 20:06:13 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:07:26.310 20:06:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:07:26.310 20:06:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:07:26.310 20:06:13 -- common/autotest_common.sh@720 -- # xtrace_disable
00:07:26.310 20:06:13 -- common/autotest_common.sh@10 -- # set +x
00:07:26.310 20:06:13 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:07:26.310 20:06:13 -- common/autotest_common.sh@1388 -- # local autotest_es=0
00:07:26.310 20:06:13 -- common/autotest_common.sh@1389 -- # xtrace_disable
00:07:26.310 20:06:13 -- common/autotest_common.sh@10 -- # set +x
00:07:31.584 INFO: APP EXITING
00:07:31.584 INFO: killing all VMs
00:07:31.584 INFO: killing vhost app
00:07:31.584 INFO: EXIT DONE
00:07:33.487 Waiting for block devices as requested
00:07:33.487 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme
00:07:33.747 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:07:33.747 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:33.747 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:34.006 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:34.006 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:34.006 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:34.006 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:34.266 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:34.266 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:34.266 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:34.266 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:34.525 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:34.525 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:34.525 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:34.785 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:34.785 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:34.785 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:35.044 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:07:37.581 Cleaning
00:07:37.581 Removing: /dev/shm/spdk_tgt_trace.pid1644823
00:07:37.581 Removing: /var/run/dpdk/spdk_pid1640796
00:07:37.581 Removing: /var/run/dpdk/spdk_pid1642985
00:07:37.581 Removing: /var/run/dpdk/spdk_pid1644823
00:07:37.581 Removing: /var/run/dpdk/spdk_pid1645290
00:07:37.581 Removing: /var/run/dpdk/spdk_pid1646151
00:07:37.581 Removing: /var/run/dpdk/spdk_pid1646370
00:07:37.581 Removing: /var/run/dpdk/spdk_pid1647312
00:07:37.581 Removing: /var/run/dpdk/spdk_pid1647519
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1647879
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1648151
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1648422
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1648715
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1648993
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1649240
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1649480
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1649749
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1650505
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1653345
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1653591
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1653647
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1653842
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1654113
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1654333
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1654807
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1654819
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1655105
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1655295
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1655541
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1655547
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1656078
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1656324
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1656562
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1656636
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1656893
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1657121
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1657188
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1657430
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1657670
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1657905
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1658148
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1658385
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1658629
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1658871
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1659111
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1659355
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1659592
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1659827
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1660077
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1660313
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1660550
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1660794
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1661029
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1661275
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1661519
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1661767
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1662079
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1662270
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1662538
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1662994
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1663433
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1663885
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1664327
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1664761
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1665061
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1665466
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1665907
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1666355
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1666796
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1667245
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1667578
00:07:37.582 Removing: /var/run/dpdk/spdk_pid1667940
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1668384
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1668829
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1669270
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1669715
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1670106
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1670454
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1670854
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1671295
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1671740
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1672184
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1672599
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1672907
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1673440
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1673916
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1674310
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1674912
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1675355
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1676181
00:07:37.841 Removing: /var/run/dpdk/spdk_pid1676602
00:07:37.841 Clean
00:07:37.841 20:06:24 -- common/autotest_common.sh@1447 -- # return 0
00:07:37.841 20:06:24 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:07:37.841 20:06:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:37.841 20:06:24 -- common/autotest_common.sh@10 -- # set +x
00:07:37.841 20:06:24 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:07:37.841 20:06:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:37.841 20:06:24 -- common/autotest_common.sh@10 -- # set +x
00:07:37.841 20:06:24 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
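The timing.txt made world-readable here is the per-step timing record that autopackage feeds to FlameGraph further down (autotest_common.sh@735). Rendering it by hand looks roughly like this; the SVG destination is an assumption, since the log never shows where the traced invocation's output goes:

    # Build-timing flamegraph; tool path and flags from the trace below, output path assumed.
    out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds "$out"/timing.txt > "$out"/timing.svg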
00:07:37.841 20:06:24 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:07:37.841 20:06:24 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:07:37.842 20:06:24 -- spdk/autotest.sh@391 -- # hash lcov
00:07:37.842 20:06:24 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:07:37.842 20:06:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:07:37.842 20:06:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:07:37.842 20:06:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:37.842 20:06:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:37.842 20:06:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.842 20:06:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.842 20:06:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.842 20:06:24 -- paths/export.sh@5 -- $ export PATH
00:07:37.842 20:06:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.842 20:06:24 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:07:38.101 20:06:24 -- common/autobuild_common.sh@437 -- $ date +%s
00:07:38.101 20:06:24 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715882784.XXXXXX
00:07:38.101 20:06:24 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715882784.rI1d0S
00:07:38.101 20:06:24 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:07:38.101 20:06:24 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:07:38.101 20:06:24 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:07:38.101 20:06:24 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:07:38.101 20:06:24 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:07:38.101 20:06:24 -- common/autobuild_common.sh@453 -- $ get_config_params
00:07:38.101 20:06:24 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:07:38.101 20:06:24 -- common/autotest_common.sh@10 -- $ set +x
00:07:38.101 20:06:25 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:07:38.101 20:06:25 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:07:38.101 20:06:25 -- pm/common@17 -- $ local monitor
00:07:38.101 20:06:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:38.101 20:06:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:38.101 20:06:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:38.101 20:06:25 -- pm/common@21 -- $ date +%s
00:07:38.101 20:06:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:38.101 20:06:25 -- pm/common@21 -- $ date +%s
00:07:38.101 20:06:25 -- pm/common@25 -- $ sleep 1
00:07:38.101 20:06:25 -- pm/common@21 -- $ date +%s
00:07:38.101 20:06:25 -- pm/common@21 -- $ date +%s
00:07:38.101 20:06:25 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715882785
00:07:38.101 20:06:25 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715882785
00:07:38.101 20:06:25 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715882785
00:07:38.101 20:06:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715882785
00:07:38.101 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715882785_collect-vmstat.pm.log
00:07:38.101 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715882785_collect-cpu-load.pm.log
00:07:38.101 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715882785_collect-cpu-temp.pm.log
00:07:38.101 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715882785_collect-bmc-pm.bmc.pm.log
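Each collector launched above drops a pid file under ../output/power, and the pm/common@42-50 loop traced below walks those files and TERMs whatever is still running (with sudo for the BMC collector). Condensed into one fragment, assuming the pid-file naming seen in the trace holds for all four monitors:

    # Condensed stop_monitor_resources, mirroring the pm/common@42-50 trace below.
    power_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
    for pid_file in "$power_dir"/collect-*.pid; do
        [[ -e $pid_file ]] || continue
        kill -TERM "$(cat "$pid_file")"    # the bmc-pm monitor needs sudo -E kill
    done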
00:07:39.038 20:06:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j88
00:07:39.038 20:06:26 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:39.038 20:06:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:07:39.038 20:06:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:07:39.038 20:06:26 -- spdk/autopackage.sh@19 -- $ timing_finish
00:07:39.038 20:06:26 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:07:39.038 20:06:26 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:07:39.038 20:06:26 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:07:39.038 20:06:26 -- spdk/autopackage.sh@20 -- $ exit 0
00:07:39.038 20:06:26 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:07:39.038 20:06:26 -- pm/common@29 -- $ signal_monitor_resources TERM
00:07:39.038 20:06:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:07:39.038 20:06:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.038 20:06:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:07:39.038 20:06:26 -- pm/common@44 -- $ pid=1682976
00:07:39.038 20:06:26 -- pm/common@50 -- $ kill -TERM 1682976
00:07:39.038 20:06:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.038 20:06:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:07:39.038 20:06:26 -- pm/common@44 -- $ pid=1682979
00:07:39.038 20:06:26 -- pm/common@50 -- $ kill -TERM 1682979
00:07:39.038 20:06:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.038 20:06:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:07:39.038 20:06:26 -- pm/common@44 -- $ pid=1682982
00:07:39.038 20:06:26 -- pm/common@50 -- $ kill -TERM 1682982
00:07:39.038 20:06:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.038 20:06:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:07:39.038 20:06:26 -- pm/common@44 -- $ pid=1683025
00:07:39.038 20:06:26 -- pm/common@50 -- $ sudo -E kill -TERM 1683025
00:07:39.048 + [[ -n 1540017 ]]
00:07:39.048 + sudo kill 1540017
00:07:39.058 [Pipeline] }
00:07:39.080 [Pipeline] // stage
00:07:39.086 [Pipeline] }
00:07:39.103 [Pipeline] // timeout
00:07:39.109 [Pipeline] }
00:07:39.127 [Pipeline] // catchError
00:07:39.135 [Pipeline] }
00:07:39.157 [Pipeline] // wrap
00:07:39.165 [Pipeline] }
00:07:39.181 [Pipeline] // catchError
00:07:39.190 [Pipeline] stage
00:07:39.193 [Pipeline] { (Epilogue)
00:07:39.210 [Pipeline] catchError
00:07:39.212 [Pipeline] {
00:07:39.226 [Pipeline] echo
00:07:39.227 Cleanup processes
00:07:39.233 [Pipeline] sh
00:07:39.518 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:39.518 1683176 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:07:39.518 1683873 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:39.532 [Pipeline] sh
00:07:39.913 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:39.913 ++ grep -v 'sudo pgrep'
00:07:39.913 ++ awk '{print $1}'
00:07:39.913 + sudo kill -9 1683176
00:07:39.926 [Pipeline] sh
00:07:40.211 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:07:41.161 [Pipeline] sh
00:07:41.446 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:07:41.446 Artifacts sizes are good
00:07:41.461 [Pipeline] archiveArtifacts
00:07:41.469 Archiving artifacts
00:07:41.529 [Pipeline] sh
00:07:41.817 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:07:41.832 [Pipeline] cleanWs
00:07:41.842 [WS-CLEANUP] Deleting project workspace...
00:07:41.842 [WS-CLEANUP] Deferred wipeout is used...
00:07:41.849 [WS-CLEANUP] done
00:07:41.851 [Pipeline] }
00:07:41.870 [Pipeline] // catchError
00:07:41.882 [Pipeline] sh
00:07:42.164 + logger -p user.info -t JENKINS-CI
00:07:42.173 [Pipeline] }
00:07:42.190 [Pipeline] // stage
00:07:42.196 [Pipeline] }
00:07:42.214 [Pipeline] // node
00:07:42.220 [Pipeline] End of Pipeline
00:07:42.255 Finished: SUCCESS