00:00:00.001 Started by upstream project "autotest-per-patch" build number 126203
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.011 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.012 The recommended git tool is: git
00:00:00.012 using credential 00000000-0000-0000-0000-000000000002
00:00:00.013 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.028 Fetching changes from the remote Git repository
00:00:00.030 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.042 Using shallow fetch with depth 1
00:00:00.042 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.042 > git --version # timeout=10
00:00:00.062 > git --version # 'git version 2.39.2'
00:00:00.062 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.087 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.087 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.458 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.470 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.482 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:03.482 > git config core.sparsecheckout # timeout=10
00:00:03.493 > git read-tree -mu HEAD # timeout=10
00:00:03.512 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:03.531 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:03.532 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:03.623 [Pipeline] Start of Pipeline
00:00:03.635 [Pipeline] library
00:00:03.636 Loading library shm_lib@master
00:00:03.636 Library shm_lib@master is cached. Copying from home.
00:00:03.649 [Pipeline] node
00:00:03.660 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:03.661 [Pipeline] {
00:00:03.671 [Pipeline] catchError
00:00:03.673 [Pipeline] {
00:00:03.684 [Pipeline] wrap
00:00:03.693 [Pipeline] {
00:00:03.699 [Pipeline] stage
00:00:03.701 [Pipeline] { (Prologue)
00:00:03.999 [Pipeline] sh
00:00:04.278 + logger -p user.info -t JENKINS-CI
00:00:04.293 [Pipeline] echo
00:00:04.294 Node: WFP39
00:00:04.300 [Pipeline] sh
00:00:04.592 [Pipeline] setCustomBuildProperty
00:00:04.603 [Pipeline] echo
00:00:04.604 Cleanup processes
00:00:04.607 [Pipeline] sh
00:00:04.884 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.884 1390191 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.896 [Pipeline] sh
00:00:05.172 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.173 ++ grep -v 'sudo pgrep'
00:00:05.173 ++ awk '{print $1}'
00:00:05.173 + sudo kill -9
00:00:05.173 + true
00:00:05.185 [Pipeline] cleanWs
00:00:05.192 [WS-CLEANUP] Deleting project workspace...
00:00:05.192 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.199 [WS-CLEANUP] done
00:00:05.202 [Pipeline] setCustomBuildProperty
00:00:05.213 [Pipeline] sh
00:00:05.489 + sudo git config --global --replace-all safe.directory '*'
00:00:05.549 [Pipeline] httpRequest
00:00:05.570 [Pipeline] echo
00:00:05.571 Sorcerer 10.211.164.101 is alive
00:00:05.577 [Pipeline] httpRequest
00:00:05.581 HttpMethod: GET
00:00:05.582 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.582 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.584 Response Code: HTTP/1.1 200 OK
00:00:05.585 Success: Status code 200 is in the accepted range: 200,404
00:00:05.585 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.331 [Pipeline] sh
00:00:06.612 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.628 [Pipeline] httpRequest
00:00:06.648 [Pipeline] echo
00:00:06.650 Sorcerer 10.211.164.101 is alive
00:00:06.660 [Pipeline] httpRequest
00:00:06.664 HttpMethod: GET
00:00:06.665 URL: http://10.211.164.101/packages/spdk_24034319f896c8e60e2c132b73fea4b3881bc812.tar.gz
00:00:06.665 Sending request to url: http://10.211.164.101/packages/spdk_24034319f896c8e60e2c132b73fea4b3881bc812.tar.gz
00:00:06.667 Response Code: HTTP/1.1 200 OK
00:00:06.667 Success: Status code 200 is in the accepted range: 200,404
00:00:06.668 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_24034319f896c8e60e2c132b73fea4b3881bc812.tar.gz
00:00:26.223 [Pipeline] sh
00:00:26.511 + tar --no-same-owner -xf spdk_24034319f896c8e60e2c132b73fea4b3881bc812.tar.gz
00:00:29.058 [Pipeline] sh
00:00:29.341 + git -C spdk log --oneline -n5
00:00:29.341 24034319f nvmf/tcp: use sock group polling for the listening sockets
00:00:29.341 245333351 nvmf/tcp: add transport field to the spdk_nvmf_tcp_port struct
00:00:29.341 bdeef1ed3 nvmf: add helper function to get a transport poll group
00:00:29.341 2728651ee accel: adjust task per ch define name
00:00:29.341 e7cce062d Examples/Perf: correct the calculation of total bandwidth
00:00:29.353 [Pipeline] }
00:00:29.369 [Pipeline] // stage
00:00:29.377 [Pipeline] stage
00:00:29.378 [Pipeline] { (Prepare)
00:00:29.393 [Pipeline] writeFile
00:00:29.409 [Pipeline] sh
00:00:29.685 + logger -p user.info -t JENKINS-CI
00:00:29.698 [Pipeline] sh
00:00:29.981 + logger -p user.info -t JENKINS-CI
00:00:29.993 [Pipeline] sh
00:00:30.277 + cat autorun-spdk.conf
00:00:30.277 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.277 SPDK_TEST_FUZZER_SHORT=1
00:00:30.277 SPDK_TEST_FUZZER=1
00:00:30.277 SPDK_RUN_UBSAN=1
00:00:30.285 RUN_NIGHTLY=0
00:00:30.289 [Pipeline] readFile
00:00:30.319 [Pipeline] withEnv
00:00:30.321 [Pipeline] {
00:00:30.336 [Pipeline] sh
00:00:30.622 + set -ex
00:00:30.622 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:30.622 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:30.622 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.622 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:30.622 ++ SPDK_TEST_FUZZER=1
00:00:30.622 ++ SPDK_RUN_UBSAN=1
00:00:30.622 ++ RUN_NIGHTLY=0
00:00:30.622 + case $SPDK_TEST_NVMF_NICS in
00:00:30.622 + DRIVERS=
00:00:30.622 + [[ -n '' ]]
00:00:30.622 + exit 0
00:00:30.632 [Pipeline] }
00:00:30.649 [Pipeline] // withEnv
00:00:30.653 [Pipeline] }
00:00:30.665 [Pipeline] // stage
00:00:30.671 [Pipeline] catchError
00:00:30.673 [Pipeline] {
00:00:30.682 [Pipeline] timeout
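Note: the Prepare stage above writes autorun-spdk.conf into the workspace, and the traced wrapper then sources it under `set -ex`, keying optional NIC driver setup off SPDK_TEST_NVMF_NICS. A minimal sketch of that handshake, reconstructed from the trace (the conf path and variable names are from the log; the surrounding script is illustrative, not the verbatim job script):

#!/usr/bin/env bash
# Sketch of the conf handshake traced above (illustrative reconstruction).
set -ex
conf=/var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
[[ -f $conf ]]                    # fail fast if the Prepare stage did not write it
source "$conf"                    # exports SPDK_TEST_FUZZER=1, SPDK_RUN_UBSAN=1, ...
case "$SPDK_TEST_NVMF_NICS" in    # unset for this fuzzer job ...
*) DRIVERS= ;;                    # ... so no NIC kernel drivers are selected
esac
[[ -n $DRIVERS ]] || exit 0       # matches the '+ exit 0' above: nothing to bind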
00:00:30.682 Timeout set to expire in 30 min
00:00:30.684 [Pipeline] {
00:00:30.694 [Pipeline] stage
00:00:30.695 [Pipeline] { (Tests)
00:00:30.705 [Pipeline] sh
00:00:30.982 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:30.982 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:30.982 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:30.982 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:30.982 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:30.982 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:30.982 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:30.982 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:30.982 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:30.982 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:30.982 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:30.982 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:30.982 + source /etc/os-release
00:00:30.982 ++ NAME='Fedora Linux'
00:00:30.982 ++ VERSION='38 (Cloud Edition)'
00:00:30.982 ++ ID=fedora
00:00:30.982 ++ VERSION_ID=38
00:00:30.982 ++ VERSION_CODENAME=
00:00:30.982 ++ PLATFORM_ID=platform:f38
00:00:30.982 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:30.982 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:30.982 ++ LOGO=fedora-logo-icon
00:00:30.982 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:30.983 ++ HOME_URL=https://fedoraproject.org/
00:00:30.983 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:30.983 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:30.983 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:30.983 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:30.983 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:30.983 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:30.983 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:30.983 ++ SUPPORT_END=2024-05-14
00:00:30.983 ++ VARIANT='Cloud Edition'
00:00:30.983 ++ VARIANT_ID=cloud
00:00:30.983 + uname -a
00:00:30.983 Linux spdk-wfp-39 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux
00:00:30.983 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:34.272 Hugepages
00:00:34.272 node hugesize free / total
00:00:34.272 node0 1048576kB 0 / 0
00:00:34.272 node0 2048kB 0 / 0
00:00:34.272 node1 1048576kB 0 / 0
00:00:34.272 node1 2048kB 0 / 0
00:00:34.272
00:00:34.272 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:34.272 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:34.272 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:34.272 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:34.272 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:34.272 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:34.272 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:34.273 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:34.273 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:34.273 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:34.273 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:34.273 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:34.273 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:34.273 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:34.273 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:34.273 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:34.273 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:34.273 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:34.273 + rm -f /tmp/spdk-ld-path
00:00:34.273 + source autorun-spdk.conf
00:00:34.273 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.273 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:34.273 ++ SPDK_TEST_FUZZER=1
00:00:34.273 ++ SPDK_RUN_UBSAN=1
00:00:34.273 ++ RUN_NIGHTLY=0
00:00:34.273 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:34.273 + [[ -n '' ]]
00:00:34.273 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:34.273 + for M in /var/spdk/build-*-manifest.txt
00:00:34.273 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:34.273 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:34.273 + for M in /var/spdk/build-*-manifest.txt
00:00:34.273 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:34.273 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:34.273 ++ uname
00:00:34.273 + [[ Linux == \L\i\n\u\x ]]
00:00:34.273 + sudo dmesg -T
00:00:34.273 + sudo dmesg --clear
00:00:34.273 + dmesg_pid=1391131
00:00:34.273 + [[ Fedora Linux == FreeBSD ]]
00:00:34.273 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.273 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.273 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:34.273 + [[ -x /usr/src/fio-static/fio ]]
00:00:34.273 + export FIO_BIN=/usr/src/fio-static/fio
00:00:34.273 + FIO_BIN=/usr/src/fio-static/fio
00:00:34.273 + sudo dmesg -Tw
00:00:34.273 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:34.273 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:34.273 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:34.273 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.273 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.273 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:34.273 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.273 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.273 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:34.273 Test configuration:
00:00:34.273 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.273 SPDK_TEST_FUZZER_SHORT=1
00:00:34.273 SPDK_TEST_FUZZER=1
00:00:34.273 SPDK_RUN_UBSAN=1
00:00:34.273 RUN_NIGHTLY=0
16:11:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
16:11:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
16:11:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:11:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:11:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:11:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:11:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:11:19 -- paths/export.sh@5 -- $ export PATH
16:11:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:11:19 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
16:11:19 -- common/autobuild_common.sh@444 -- $ date +%s
16:11:19 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721052679.XXXXXX
16:11:19 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721052679.FaeO1o
16:11:19 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
16:11:19 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
16:11:19 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
16:11:19 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
16:11:19 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
16:11:19 -- common/autobuild_common.sh@460 -- $ get_config_params
16:11:19 -- common/autotest_common.sh@396 -- $ xtrace_disable
16:11:19 -- common/autotest_common.sh@10 -- $ set +x
16:11:19 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
16:11:19 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
16:11:19 -- pm/common@17 -- $ local monitor
16:11:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:11:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:11:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:11:19 -- pm/common@21 -- $ date +%s
16:11:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:11:19 -- pm/common@21 -- $ date +%s
00:00:34.273 16:11:19 -- pm/common@25 -- $ sleep 1
00:00:34.273 16:11:19 -- pm/common@21 -- $ date +%s
00:00:34.273 16:11:19 -- pm/common@21 -- $ date +%s
00:00:34.273 16:11:19 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721052679
00:00:34.273 16:11:19 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721052679
00:00:34.273 16:11:19 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721052679
00:00:34.273 16:11:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721052679
00:00:34.273 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721052679_collect-vmstat.pm.log
00:00:34.273 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721052679_collect-cpu-temp.pm.log
00:00:34.273 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721052679_collect-cpu-load.pm.log
00:00:34.273 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721052679_collect-bmc-pm.bmc.pm.log
00:00:35.207 16:11:20 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:35.207 16:11:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:35.207 16:11:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:35.207 16:11:20 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:35.207 16:11:20 -- spdk/autobuild.sh@16 -- $ date -u
00:00:35.207 Mon Jul 15 02:11:20 PM UTC 2024
00:00:35.207 16:11:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:35.207 v24.09-pre-209-g24034319f
00:00:35.207 16:11:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:35.207 16:11:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:35.207 16:11:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:35.207 16:11:20 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:35.207 16:11:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:35.207 16:11:20 -- common/autotest_common.sh@10 -- $ set +x
00:00:35.466 ************************************
00:00:35.466 START TEST ubsan
00:00:35.466 ************************************
00:00:35.466 16:11:20 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:35.466 using ubsan
00:00:35.466
00:00:35.466 real 0m0.000s
00:00:35.466 user 0m0.000s
00:00:35.466 sys 0m0.000s
00:00:35.466 16:11:20 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:35.466 16:11:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:35.466 ************************************
00:00:35.466 END TEST ubsan
00:00:35.466 ************************************
00:00:35.466 16:11:20 -- common/autotest_common.sh@1142 -- $ return 0
00:00:35.466 16:11:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:35.466 16:11:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:35.466 16:11:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:35.466 16:11:20 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:00:35.466 16:11:20 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:00:35.466 16:11:20 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:00:35.466 16:11:20 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:00:35.466 16:11:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:35.466 16:11:20 -- common/autotest_common.sh@10 -- $ set +x
00:00:35.466 ************************************
00:00:35.466 START TEST autobuild_llvm_precompile
00:00:35.466 ************************************
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38)
00:00:35.466 Target: x86_64-redhat-linux-gnu
00:00:35.466 Thread model: posix
00:00:35.466 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]]
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a'
00:00:35.466 16:11:20 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:00:35.725 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:00:35.725 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:00:36.292 Using 'verbs' RDMA provider
00:00:52.134 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:04.333 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:04.333 Creating mk/config.mk...done.
00:01:04.333 Creating mk/cc.flags.mk...done.
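Note: the precompile step above picks the libFuzzer archive by parsing the clang version banner and expanding a bash extended glob, then appends it to the configure flags. A sketch of that derivation, reconstructed from the xtrace (the glob, exports, and archive path are taken from the log; the regex guard and error handling are illustrative):

# Sketch of the fuzzer-library lookup traced above (illustrative reconstruction).
shopt -s extglob nullglob
ver_re='version (([0-9]+)\.([0-9]+)\.([0-9]+))'
[[ $(clang --version) =~ $ver_re ]] || exit 1
clang_version=${BASH_REMATCH[1]}   # "16.0.6" on this host
clang_num=${BASH_REMATCH[2]}       # "16"
export CC=clang-$clang_num CXX=clang++-$clang_num
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
fuzzer_lib=${fuzzer_libs[0]}       # /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a here
[[ -e $fuzzer_lib ]] && config_params+=" --with-fuzzer=$fuzzer_lib"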
00:01:04.333 Type 'make' to build.
00:01:04.333
00:01:04.333 real 0m28.496s
00:01:04.333 user 0m12.582s
00:01:04.333 sys 0m15.209s
00:01:04.333 16:11:49 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:04.333 16:11:49 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:04.333 ************************************
00:01:04.333 END TEST autobuild_llvm_precompile
00:01:04.333 ************************************
00:01:04.333 16:11:49 -- common/autotest_common.sh@1142 -- $ return 0
00:01:04.333 16:11:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:04.333 16:11:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:04.333 16:11:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:04.333 16:11:49 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:01:04.333 16:11:49 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:01:04.334 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:04.334 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:04.619 Using 'verbs' RDMA provider
00:01:18.204 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:30.414 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:30.414 Creating mk/config.mk...done.
00:01:30.414 Creating mk/cc.flags.mk...done.
00:01:30.414 Type 'make' to build.
00:01:30.414 16:12:14 -- spdk/autobuild.sh@69 -- $ run_test make make -j72
00:01:30.414 16:12:14 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:30.414 16:12:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:30.414 16:12:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.414 ************************************
00:01:30.414 START TEST make
00:01:30.415 ************************************
00:01:30.415 16:12:14 make -- common/autotest_common.sh@1123 -- $ make -j72
00:01:30.415 make[1]: Nothing to be done for 'all'.
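Note: the `make` target above hands the libvfio-user submodule to Meson/Ninja with an out-of-tree build directory, which is what produces the configure and compile output below. A standalone equivalent of that flow, sketched from the trace (the `meson setup` invocation itself is not shown in the log, so its flags are inferred from the "User defined options" summary below; the install line mirrors the trace):

# Sketch of the out-of-tree Meson/Ninja flow behind the libvfio-user build below.
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
    --buildtype=debug --default-library=static --libdir=/usr/local/lib
ninja -C "$SPDK/build/libvfio-user/build-debug"
DESTDIR="$SPDK/build/libvfio-user" \
    meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"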
00:01:31.350 The Meson build system
00:01:31.350 Version: 1.3.1
00:01:31.350 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:31.350 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:31.350 Build type: native build
00:01:31.350 Project name: libvfio-user
00:01:31.350 Project version: 0.0.1
00:01:31.350 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:31.350 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:31.350 Host machine cpu family: x86_64
00:01:31.351 Host machine cpu: x86_64
00:01:31.351 Run-time dependency threads found: YES
00:01:31.351 Library dl found: YES
00:01:31.351 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:31.351 Run-time dependency json-c found: YES 0.17
00:01:31.351 Run-time dependency cmocka found: YES 1.1.7
00:01:31.351 Program pytest-3 found: NO
00:01:31.351 Program flake8 found: NO
00:01:31.351 Program misspell-fixer found: NO
00:01:31.351 Program restructuredtext-lint found: NO
00:01:31.351 Program valgrind found: YES (/usr/bin/valgrind)
00:01:31.351 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:31.351 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:31.351 Compiler for C supports arguments -Wwrite-strings: YES
00:01:31.351 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:31.351 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:31.351 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:31.351 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:31.351 Build targets in project: 8
00:01:31.351 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:31.351 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:31.351
00:01:31.351 libvfio-user 0.0.1
00:01:31.351
00:01:31.351 User defined options
00:01:31.351 buildtype : debug
00:01:31.351 default_library: static
00:01:31.351 libdir : /usr/local/lib
00:01:31.351
00:01:31.351 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:31.608 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:31.608 [1/36] Compiling C object samples/lspci.p/lspci.c.o
00:01:31.608 [2/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:31.608 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:31.608 [4/36] Compiling C object samples/null.p/null.c.o
00:01:31.608 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:01:31.608 [6/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:31.608 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:01:31.608 [8/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:31.608 [9/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:01:31.608 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:01:31.608 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:31.608 [12/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:31.608 [13/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:01:31.608 [14/36] Compiling C object test/unit_tests.p/mocks.c.o
00:01:31.608 [15/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:01:31.608 [16/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:01:31.609 [17/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:31.609 [18/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:31.609 [19/36] Compiling C object samples/server.p/server.c.o
00:01:31.609 [20/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:31.609 [21/36] Compiling C object samples/client.p/client.c.o
00:01:31.868 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:31.868 [23/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:31.868 [24/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:31.868 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:31.868 [26/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:31.868 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:01:31.868 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:31.868 [29/36] Linking static target lib/libvfio-user.a
00:01:31.868 [30/36] Linking target samples/client
00:01:31.868 [31/36] Linking target test/unit_tests
00:01:31.868 [32/36] Linking target samples/gpio-pci-idio-16
00:01:31.868 [33/36] Linking target samples/shadow_ioeventfd_server
00:01:31.868 [34/36] Linking target samples/server
00:01:31.868 [35/36] Linking target samples/lspci
00:01:31.868 [36/36] Linking target samples/null
00:01:31.868 INFO: autodetecting backend as ninja
00:01:31.868 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:31.868 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:32.126 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:32.126 ninja: no work to do.
00:01:37.405 The Meson build system
00:01:37.405 Version: 1.3.1
00:01:37.405 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:01:37.405 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:01:37.405 Build type: native build
00:01:37.405 Program cat found: YES (/usr/bin/cat)
00:01:37.405 Project name: DPDK
00:01:37.405 Project version: 24.03.0
00:01:37.405 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:37.405 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:37.405 Host machine cpu family: x86_64
00:01:37.405 Host machine cpu: x86_64
00:01:37.405 Message: ## Building in Developer Mode ##
00:01:37.405 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:37.405 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:37.405 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:37.405 Program python3 found: YES (/usr/bin/python3)
00:01:37.405 Program cat found: YES (/usr/bin/cat)
00:01:37.405 Compiler for C supports arguments -march=native: YES
00:01:37.405 Checking for size of "void *" : 8
00:01:37.405 Checking for size of "void *" : 8 (cached)
00:01:37.405 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:37.405 Library m found: YES
00:01:37.405 Library numa found: YES
00:01:37.405 Has header "numaif.h" : YES
00:01:37.405 Library fdt found: NO
00:01:37.405 Library execinfo found: NO
00:01:37.405 Has header "execinfo.h" : YES
00:01:37.405 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:37.405 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:37.405 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:37.405 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:37.405 Run-time dependency openssl found: YES 3.0.9
00:01:37.405 Run-time dependency libpcap found: YES 1.10.4
00:01:37.405 Has header "pcap.h" with dependency libpcap: YES
00:01:37.405 Compiler for C supports arguments -Wcast-qual: YES
00:01:37.405 Compiler for C supports arguments -Wdeprecated: YES
00:01:37.405 Compiler for C supports arguments -Wformat: YES
00:01:37.405 Compiler for C supports arguments -Wformat-nonliteral: YES
00:01:37.405 Compiler for C supports arguments -Wformat-security: YES
00:01:37.405 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:37.405 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:37.405 Compiler for C supports arguments -Wnested-externs: YES
00:01:37.405 Compiler for C supports arguments -Wold-style-definition: YES
00:01:37.405 Compiler for C supports arguments -Wpointer-arith: YES
00:01:37.405 Compiler for C supports arguments -Wsign-compare: YES
00:01:37.405 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:37.405 Compiler for C supports arguments -Wundef: YES
00:01:37.405 Compiler for C supports arguments -Wwrite-strings: YES
00:01:37.405 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:37.405 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:01:37.405 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:37.405 Program objdump found: YES (/usr/bin/objdump)
00:01:37.405 Compiler for C supports arguments -mavx512f: YES
00:01:37.405 Checking if "AVX512 checking" compiles: YES
00:01:37.405 Fetching value of define "__SSE4_2__" : 1
00:01:37.405 Fetching value of define "__AES__" : 1
00:01:37.405 Fetching value of define "__AVX__" : 1
00:01:37.405 Fetching value of define "__AVX2__" : 1
00:01:37.405 Fetching value of define "__AVX512BW__" : 1
00:01:37.405 Fetching value of define "__AVX512CD__" : 1
00:01:37.405 Fetching value of define "__AVX512DQ__" : 1
00:01:37.405 Fetching value of define "__AVX512F__" : 1
00:01:37.405 Fetching value of define "__AVX512VL__" : 1
00:01:37.405 Fetching value of define "__PCLMUL__" : 1
00:01:37.405 Fetching value of define "__RDRND__" : 1
00:01:37.405 Fetching value of define "__RDSEED__" : 1
00:01:37.405 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:37.405 Fetching value of define "__znver1__" : (undefined)
00:01:37.405 Fetching value of define "__znver2__" : (undefined)
00:01:37.405 Fetching value of define "__znver3__" : (undefined)
00:01:37.405 Fetching value of define "__znver4__" : (undefined)
00:01:37.405 Compiler for C supports arguments -Wno-format-truncation: NO
00:01:37.405 Message: lib/log: Defining dependency "log"
00:01:37.405 Message: lib/kvargs: Defining dependency "kvargs"
00:01:37.405 Message: lib/telemetry: Defining dependency "telemetry"
00:01:37.405 Checking for function "getentropy" : NO
00:01:37.405 Message: lib/eal: Defining dependency "eal"
00:01:37.405 Message: lib/ring: Defining dependency "ring"
00:01:37.405 Message: lib/rcu: Defining dependency "rcu"
00:01:37.405 Message: lib/mempool: Defining dependency "mempool"
00:01:37.405 Message: lib/mbuf: Defining dependency "mbuf"
00:01:37.405 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:37.405 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:37.405 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:37.405 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:37.405 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:37.405 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:37.405 Compiler for C supports arguments -mpclmul: YES
00:01:37.405 Compiler for C supports arguments -maes: YES
00:01:37.405 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:37.405 Compiler for C supports arguments -mavx512bw: YES
00:01:37.405 Compiler for C supports arguments -mavx512dq: YES
00:01:37.405 Compiler for C supports arguments -mavx512vl: YES
00:01:37.405 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:37.405 Compiler for C supports arguments -mavx2: YES
00:01:37.405 Compiler for C supports arguments -mavx: YES
00:01:37.405 Message: lib/net: Defining dependency "net"
00:01:37.405 Message: lib/meter: Defining dependency "meter"
00:01:37.405 Message: lib/ethdev: Defining dependency "ethdev"
00:01:37.405 Message: lib/pci: Defining dependency "pci"
00:01:37.405 Message: lib/cmdline: Defining dependency "cmdline"
00:01:37.405 Message: lib/hash: Defining dependency "hash"
00:01:37.405 Message: lib/timer: Defining dependency "timer"
00:01:37.405 Message: lib/compressdev: Defining dependency "compressdev"
00:01:37.405 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:37.405 Message: lib/dmadev: Defining dependency "dmadev"
00:01:37.405 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:37.405 Message: lib/power: Defining dependency "power"
00:01:37.405 Message: lib/reorder: Defining dependency "reorder"
00:01:37.405 Message: lib/security: Defining dependency "security"
00:01:37.405 Has header "linux/userfaultfd.h" : YES
00:01:37.405 Has header "linux/vduse.h" : YES
00:01:37.405 Message: lib/vhost: Defining dependency "vhost"
00:01:37.405 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:01:37.405 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:37.405 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:37.405 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:37.405 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:37.405 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:37.405 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:37.405 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:37.405 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:37.406 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:37.406 Program doxygen found: YES (/usr/bin/doxygen)
00:01:37.406 Configuring doxy-api-html.conf using configuration
00:01:37.406 Configuring doxy-api-man.conf using configuration
00:01:37.406 Program mandb found: YES (/usr/bin/mandb)
00:01:37.406 Program sphinx-build found: NO
00:01:37.406 Configuring rte_build_config.h using configuration
00:01:37.406 Message:
00:01:37.406 =================
00:01:37.406 Applications Enabled
00:01:37.406 =================
00:01:37.406
00:01:37.406 apps:
00:01:37.406
00:01:37.406
00:01:37.406 Message:
00:01:37.406 =================
00:01:37.406 Libraries Enabled
00:01:37.406 =================
00:01:37.406
00:01:37.406 libs:
00:01:37.406 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:37.406 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:37.406 cryptodev, dmadev, power, reorder, security, vhost,
00:01:37.406
00:01:37.406 Message:
00:01:37.406 ===============
00:01:37.406 Drivers Enabled
00:01:37.406 ===============
00:01:37.406
00:01:37.406 common:
00:01:37.406
00:01:37.406 bus:
00:01:37.406 pci, vdev,
00:01:37.406 mempool:
00:01:37.406 ring,
00:01:37.406 dma:
00:01:37.406
00:01:37.406 net:
00:01:37.406
00:01:37.406 crypto:
00:01:37.406
00:01:37.406 compress:
00:01:37.406
00:01:37.406 vdpa:
00:01:37.406
00:01:37.406
00:01:37.406 Message:
00:01:37.406 =================
00:01:37.406 Content Skipped
00:01:37.406 =================
00:01:37.406
00:01:37.406 apps:
00:01:37.406 dumpcap: explicitly disabled via build config
00:01:37.406 graph: explicitly disabled via build config
00:01:37.406 pdump: explicitly disabled via build config
00:01:37.406 proc-info: explicitly disabled via build config
00:01:37.406 test-acl: explicitly disabled via build config
00:01:37.406 test-bbdev: explicitly disabled via build config
00:01:37.406 test-cmdline: explicitly disabled via build config
00:01:37.406 test-compress-perf: explicitly disabled via build config
00:01:37.406 test-crypto-perf: explicitly disabled via build config
00:01:37.406 test-dma-perf: explicitly disabled via build config
00:01:37.406 test-eventdev: explicitly disabled via build config
00:01:37.406 test-fib: explicitly disabled via build config
00:01:37.406 test-flow-perf: explicitly disabled via build config
00:01:37.406 test-gpudev: explicitly disabled via build config
00:01:37.406 test-mldev: explicitly disabled via build config
00:01:37.406 test-pipeline: explicitly disabled via build config
00:01:37.406 test-pmd: explicitly disabled via build config
00:01:37.406 test-regex: explicitly disabled via build config
00:01:37.406 test-sad: explicitly disabled via build config
00:01:37.406 test-security-perf: explicitly disabled via build config
00:01:37.406
00:01:37.406 libs:
00:01:37.406 argparse: explicitly disabled via build config
00:01:37.406 metrics: explicitly disabled via build config
00:01:37.406 acl: explicitly disabled via build config
00:01:37.406 bbdev: explicitly disabled via build config
00:01:37.406 bitratestats: explicitly disabled via build config
00:01:37.406 bpf: explicitly disabled via build config
00:01:37.406 cfgfile: explicitly disabled via build config
00:01:37.406 distributor: explicitly disabled via build config
00:01:37.406 efd: explicitly disabled via build config
00:01:37.406 eventdev: explicitly disabled via build config
00:01:37.406 dispatcher: explicitly disabled via build config
00:01:37.406 gpudev: explicitly disabled via build config
00:01:37.406 gro: explicitly disabled via build config
00:01:37.406 gso: explicitly disabled via build config
00:01:37.406 ip_frag: explicitly disabled via build config
00:01:37.406 jobstats: explicitly disabled via build config
00:01:37.406 latencystats: explicitly disabled via build config
00:01:37.406 lpm: explicitly disabled via build config
00:01:37.406 member: explicitly disabled via build config
00:01:37.406 pcapng: explicitly disabled via build config
00:01:37.406 rawdev: explicitly disabled via build config
00:01:37.406 regexdev: explicitly disabled via build config
00:01:37.406 mldev: explicitly disabled via build config
00:01:37.406 rib: explicitly disabled via build config
00:01:37.406 sched: explicitly disabled via build config
00:01:37.406 stack: explicitly disabled via build config
00:01:37.406 ipsec: explicitly disabled via build config
00:01:37.406 pdcp: explicitly disabled via build config
00:01:37.406 fib: explicitly disabled via build config
00:01:37.406 port: explicitly disabled via build config
00:01:37.406 pdump: explicitly disabled via build config
00:01:37.406 table: explicitly disabled via build config
00:01:37.406 pipeline: explicitly disabled via build config
00:01:37.406 graph: explicitly disabled via build config
00:01:37.406 node: explicitly disabled via build config
00:01:37.406
00:01:37.406 drivers:
00:01:37.406 common/cpt: not in enabled drivers build config
00:01:37.406 common/dpaax: not in enabled drivers build config
00:01:37.406 common/iavf: not in enabled drivers build config
00:01:37.406 common/idpf: not in enabled drivers build config
00:01:37.406 common/ionic: not in enabled drivers build config
00:01:37.406 common/mvep: not in enabled drivers build config
00:01:37.406 common/octeontx: not in enabled drivers build config
00:01:37.406 bus/auxiliary: not in enabled drivers build config
00:01:37.406 bus/cdx: not in enabled drivers build config
00:01:37.406 bus/dpaa: not in enabled drivers build config
00:01:37.406 bus/fslmc: not in enabled drivers build config
00:01:37.406 bus/ifpga: not in enabled drivers build config
00:01:37.406 bus/platform: not in enabled drivers build config
00:01:37.406 bus/uacce: not in enabled drivers build config
00:01:37.406 bus/vmbus: not in enabled drivers build config
00:01:37.406 common/cnxk: not in enabled drivers build config
00:01:37.406 common/mlx5: not in enabled drivers build config
00:01:37.406 common/nfp: not in enabled drivers build config
00:01:37.406 common/nitrox: not in enabled drivers build config
00:01:37.406 common/qat: not in enabled drivers build config
00:01:37.406 common/sfc_efx: not in enabled drivers build config
00:01:37.406 mempool/bucket: not in enabled drivers build config
00:01:37.406 mempool/cnxk: not in enabled drivers build config
00:01:37.406 mempool/dpaa: not in enabled drivers build config
00:01:37.406 mempool/dpaa2: not in enabled drivers build config
00:01:37.406 mempool/octeontx: not in enabled drivers build config
00:01:37.406 mempool/stack: not in enabled drivers build config
00:01:37.406 dma/cnxk: not in enabled drivers build config
00:01:37.406 dma/dpaa: not in enabled drivers build config
00:01:37.406 dma/dpaa2: not in enabled drivers build config
00:01:37.406 dma/hisilicon: not in enabled drivers build config
00:01:37.406 dma/idxd: not in enabled drivers build config
00:01:37.406 dma/ioat: not in enabled drivers build config
00:01:37.406 dma/skeleton: not in enabled drivers build config
00:01:37.406 net/af_packet: not in enabled drivers build config
00:01:37.406 net/af_xdp: not in enabled drivers build config
00:01:37.406 net/ark: not in enabled drivers build config
00:01:37.406 net/atlantic: not in enabled drivers build config
00:01:37.406 net/avp: not in enabled drivers build config
00:01:37.406 net/axgbe: not in enabled drivers build config
00:01:37.406 net/bnx2x: not in enabled drivers build config
00:01:37.406 net/bnxt: not in enabled drivers build config
00:01:37.406 net/bonding: not in enabled drivers build config
00:01:37.406 net/cnxk: not in enabled drivers build config
00:01:37.406 net/cpfl: not in enabled drivers build config
00:01:37.406 net/cxgbe: not in enabled drivers build config
00:01:37.406 net/dpaa: not in enabled drivers build config
00:01:37.406 net/dpaa2: not in enabled drivers build config
00:01:37.406 net/e1000: not in enabled drivers build config
00:01:37.406 net/ena: not in enabled drivers build config
00:01:37.406 net/enetc: not in enabled drivers build config
00:01:37.406 net/enetfec: not in enabled drivers build config
00:01:37.406 net/enic: not in enabled drivers build config
00:01:37.406 net/failsafe: not in enabled drivers build config
00:01:37.406 net/fm10k: not in enabled drivers build config
00:01:37.406 net/gve: not in enabled drivers build config
00:01:37.406 net/hinic: not in enabled drivers build config
00:01:37.406 net/hns3: not in enabled drivers build config
00:01:37.406 net/i40e: not in enabled drivers build config
00:01:37.406 net/iavf: not in enabled drivers build config
00:01:37.406 net/ice: not in enabled drivers build config
00:01:37.406 net/idpf: not in enabled drivers build config
00:01:37.406 net/igc: not in enabled drivers build config
00:01:37.406 net/ionic: not in enabled drivers build config
00:01:37.406 net/ipn3ke: not in enabled drivers build config
00:01:37.406 net/ixgbe: not in enabled drivers build config
00:01:37.406 net/mana: not in enabled drivers build config
00:01:37.406 net/memif: not in enabled drivers build config
00:01:37.406 net/mlx4: not in enabled drivers build config
00:01:37.406 net/mlx5: not in enabled drivers build config
00:01:37.406 net/mvneta: not in enabled drivers build config
00:01:37.406 net/mvpp2: not in enabled drivers build config
00:01:37.406 net/netvsc: not in enabled drivers build config
00:01:37.406 net/nfb: not in enabled drivers build config
00:01:37.406 net/nfp: not in enabled drivers build config
00:01:37.406 net/ngbe: not in enabled drivers build config
00:01:37.406 net/null: not in enabled drivers build config
00:01:37.406 net/octeontx: not in enabled drivers build config
00:01:37.406 net/octeon_ep: not in enabled drivers build config
00:01:37.406 net/pcap: not in enabled drivers build config
00:01:37.406 net/pfe: not in enabled drivers build config
00:01:37.406 net/qede: not in enabled drivers build config
00:01:37.406 net/ring: not in enabled drivers build config
00:01:37.406 net/sfc: not in enabled drivers build config
00:01:37.406 net/softnic: not in enabled drivers build config
00:01:37.406 net/tap: not in enabled drivers build config
00:01:37.406 net/thunderx: not in enabled drivers build config
00:01:37.406 net/txgbe: not in enabled drivers build config
00:01:37.406 net/vdev_netvsc: not in enabled drivers build config
00:01:37.406 net/vhost: not in enabled drivers build config
00:01:37.406 net/virtio: not in enabled drivers build config
00:01:37.406 net/vmxnet3: not in enabled drivers build config
00:01:37.406 raw/*: missing internal dependency, "rawdev"
00:01:37.406 crypto/armv8: not in enabled drivers build config
00:01:37.406 crypto/bcmfs: not in enabled drivers build config
00:01:37.406 crypto/caam_jr: not in enabled drivers build config
00:01:37.406 crypto/ccp: not in enabled drivers build config
00:01:37.406 crypto/cnxk: not in enabled drivers build config
00:01:37.406 crypto/dpaa_sec: not in enabled drivers build config
00:01:37.406 crypto/dpaa2_sec: not in enabled drivers build config
00:01:37.406 crypto/ipsec_mb: not in enabled drivers build config
00:01:37.406 crypto/mlx5: not in enabled drivers build config
00:01:37.406 crypto/mvsam: not in enabled drivers build config
00:01:37.406 crypto/nitrox: not in enabled drivers build config
00:01:37.406 crypto/null: not in enabled drivers build config
00:01:37.407 crypto/octeontx: not in enabled drivers build config
00:01:37.407 crypto/openssl: not in enabled drivers build config
00:01:37.407 crypto/scheduler: not in enabled drivers build config
00:01:37.407 crypto/uadk: not in enabled drivers build config
00:01:37.407 crypto/virtio: not in enabled drivers build config
00:01:37.407 compress/isal: not in enabled drivers build config
00:01:37.407 compress/mlx5: not in enabled drivers build config
00:01:37.407 compress/nitrox: not in enabled drivers build config
00:01:37.407 compress/octeontx: not in enabled drivers build config
00:01:37.407 compress/zlib: not in enabled drivers build config
00:01:37.407 regex/*: missing internal dependency, "regexdev"
00:01:37.407 ml/*: missing internal dependency, "mldev"
00:01:37.407 vdpa/ifc: not in enabled drivers build config
00:01:37.407 vdpa/mlx5: not in enabled drivers build config
00:01:37.407 vdpa/nfp: not in enabled drivers build config
00:01:37.407 vdpa/sfc: not in enabled drivers build config
00:01:37.407 event/*: missing internal dependency, "eventdev"
00:01:37.407 baseband/*: missing internal dependency, "bbdev"
00:01:37.407 gpu/*: missing internal dependency, "gpudev"
00:01:37.407
00:01:37.407
00:01:37.665 Build targets in project: 85
00:01:37.665
00:01:37.665 DPDK 24.03.0
00:01:37.665
00:01:37.665 User defined options
00:01:37.665 buildtype : debug
00:01:37.665 default_library : static
00:01:37.665 libdir : lib
00:01:37.665 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:37.665 c_args : -fPIC -Werror
00:01:37.665 c_link_args :
00:01:37.665 cpu_instruction_set: native
00:01:37.665 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:37.665 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:37.666 enable_docs : false
00:01:37.666 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:37.666 enable_kmods : false
00:01:37.666 max_lcores : 128
00:01:37.666 tests : false
00:01:37.666
00:01:37.666 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:37.932 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:01:38.221 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:38.221 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:38.221 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:38.221 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:38.221 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:38.221 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:38.221 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:38.221 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:38.221 [9/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:38.221 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:38.221 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:38.221 [12/268] Linking static target lib/librte_kvargs.a
00:01:38.221 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:38.221 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:38.221 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:38.221 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:38.221 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:38.221 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:38.221 [19/268] Linking static target lib/librte_log.a
00:01:38.484 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.484 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:38.484 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:38.484 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:38.484 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:38.484 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:38.484 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:38.484 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:38.484 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:38.484 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:38.484 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:38.484 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:38.484 [32/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:38.484 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:38.484 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.484 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.484 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.484 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.484 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.484 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.484 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.484 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.484 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.484 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.484 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.484 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.484 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.484 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.484 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.744 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.744 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.744 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.744 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.744 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.744 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.744 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.744 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.744 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.744 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.744 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.744 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.744 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.744 [62/268] Linking static target lib/librte_telemetry.a 00:01:38.745 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.745 [64/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.745 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.745 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.745 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.745 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.745 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.745 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.745 [71/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.745 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.745 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.745 [74/268] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.745 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.745 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.745 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:38.745 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.745 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:38.745 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.745 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.745 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.745 [83/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:38.745 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:38.745 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:38.745 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.745 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.745 [88/268] Linking static target lib/librte_pci.a 00:01:38.745 [89/268] Linking static target lib/librte_ring.a 00:01:38.745 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.745 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.745 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:38.745 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.745 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:38.745 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:38.745 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.745 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.745 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.745 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.745 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.745 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.745 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.745 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.745 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.745 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.745 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:38.745 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.745 [108/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.745 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.745 [110/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.745 [111/268] Linking static target lib/librte_eal.a 00:01:38.745 [112/268] Linking static target lib/librte_mempool.a 00:01:38.745 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.745 [114/268] Linking static target lib/librte_rcu.a 00:01:38.745 [115/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.003 [116/268] Linking target lib/librte_log.so.24.1 00:01:39.004 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.004 [118/268] Linking static target lib/librte_mbuf.a 00:01:39.004 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.004 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.004 [121/268] Linking static target lib/librte_net.a 00:01:39.004 [122/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:39.004 [123/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.004 [124/268] Linking target lib/librte_kvargs.so.24.1 00:01:39.261 [125/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.261 [126/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.261 [127/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.261 [128/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:39.261 [129/268] Linking static target lib/librte_meter.a 00:01:39.261 [130/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.261 [131/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:39.261 [132/268] Linking target lib/librte_telemetry.so.24.1 00:01:39.261 [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.261 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.261 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:39.261 [136/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.261 [137/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.261 [138/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.261 [139/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.261 [140/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.261 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.261 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.261 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:39.261 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.261 [145/268] Linking static target lib/librte_timer.a 00:01:39.261 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:39.261 [147/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:39.261 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.261 [149/268] Linking static target lib/librte_cmdline.a 00:01:39.261 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.261 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.261 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.261 [153/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:39.261 [154/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.261 [155/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:39.261 [156/268] Linking static target lib/librte_dmadev.a 00:01:39.261 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.261 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.261 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.261 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.261 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.261 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.261 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.261 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.261 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.261 [166/268] Linking static target lib/librte_compressdev.a 00:01:39.261 [167/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:39.261 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:39.261 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.261 [170/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.261 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.261 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.518 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:39.518 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.518 [175/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.518 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.518 [177/268] Linking static target lib/librte_reorder.a 00:01:39.518 [178/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.518 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.518 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.518 [181/268] Linking static target lib/librte_security.a 00:01:39.518 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.518 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.518 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.518 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:39.518 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.518 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.518 [188/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.518 [189/268] Linking static target lib/librte_hash.a 00:01:39.518 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.518 [191/268] Linking static target lib/librte_power.a 00:01:39.518 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.519 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.519 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.519 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.519 
[196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.519 [197/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.519 [198/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.519 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:39.519 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.519 [201/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.519 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.519 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.519 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:39.519 [205/268] Linking static target lib/librte_cryptodev.a 00:01:39.519 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.777 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.777 [208/268] Linking static target drivers/librte_bus_vdev.a 00:01:39.777 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.777 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.777 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.777 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.777 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:39.777 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:39.777 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.777 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.777 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.035 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.035 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:40.035 [220/268] Linking static target lib/librte_ethdev.a 00:01:40.035 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.035 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.035 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.602 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.602 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.602 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:40.602 [227/268] Linking static target lib/librte_vhost.a 00:01:40.602 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.602 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.980 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.914 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:49.476 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.379 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.379 [234/268] Linking target lib/librte_eal.so.24.1 00:01:51.637 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:51.637 [236/268] Linking target lib/librte_pci.so.24.1 00:01:51.637 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:51.637 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:51.637 [239/268] Linking target lib/librte_timer.so.24.1 00:01:51.637 [240/268] Linking target lib/librte_ring.so.24.1 00:01:51.637 [241/268] Linking target lib/librte_meter.so.24.1 00:01:51.637 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:51.637 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:51.895 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:51.895 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:51.895 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:51.895 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:51.895 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:51.895 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:51.895 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:51.895 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:51.895 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:51.895 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:52.153 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:52.153 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:52.153 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:52.153 [257/268] Linking target lib/librte_net.so.24.1 00:01:52.153 [258/268] Linking target lib/librte_reorder.so.24.1 00:01:52.412 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:52.412 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:52.412 [261/268] Linking target lib/librte_security.so.24.1 00:01:52.412 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:52.412 [263/268] Linking target lib/librte_hash.so.24.1 00:01:52.412 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:52.669 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:52.669 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:52.669 [267/268] Linking target lib/librte_power.so.24.1 00:01:52.669 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:52.669 INFO: autodetecting backend as ninja 00:01:52.669 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:01:53.600 CC lib/ut/ut.o 00:01:53.600 CC lib/log/log_deprecated.o 00:01:53.600 CC lib/log/log_flags.o 00:01:53.600 CC lib/log/log.o 00:01:53.600 CC lib/ut_mock/mock.o 00:01:53.858 LIB libspdk_ut.a 00:01:53.858 LIB libspdk_log.a 00:01:53.858 LIB libspdk_ut_mock.a 00:01:54.115 CC lib/util/cpuset.o 00:01:54.115 CC lib/util/base64.o 00:01:54.115 CC lib/util/bit_array.o 00:01:54.115 CC 
lib/util/crc16.o 00:01:54.115 CC lib/util/crc32.o 00:01:54.115 CC lib/util/crc32c.o 00:01:54.115 CC lib/util/dif.o 00:01:54.115 CC lib/util/crc32_ieee.o 00:01:54.115 CC lib/util/file.o 00:01:54.115 CC lib/util/fd.o 00:01:54.115 CC lib/util/crc64.o 00:01:54.115 CC lib/util/hexlify.o 00:01:54.115 CC lib/ioat/ioat.o 00:01:54.115 CC lib/util/iov.o 00:01:54.115 CC lib/util/math.o 00:01:54.115 CC lib/util/pipe.o 00:01:54.115 CC lib/util/strerror_tls.o 00:01:54.115 CC lib/util/string.o 00:01:54.115 CC lib/util/uuid.o 00:01:54.115 CC lib/util/xor.o 00:01:54.115 CC lib/dma/dma.o 00:01:54.115 CC lib/util/fd_group.o 00:01:54.115 CC lib/util/zipf.o 00:01:54.115 CXX lib/trace_parser/trace.o 00:01:54.115 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.115 CC lib/vfio_user/host/vfio_user.o 00:01:54.373 LIB libspdk_dma.a 00:01:54.373 LIB libspdk_ioat.a 00:01:54.373 LIB libspdk_vfio_user.a 00:01:54.373 LIB libspdk_util.a 00:01:54.631 LIB libspdk_trace_parser.a 00:01:54.631 CC lib/idxd/idxd.o 00:01:54.631 CC lib/idxd/idxd_user.o 00:01:54.631 CC lib/idxd/idxd_kernel.o 00:01:54.631 CC lib/rdma_utils/rdma_utils.o 00:01:54.631 CC lib/conf/conf.o 00:01:54.631 CC lib/json/json_parse.o 00:01:54.631 CC lib/json/json_util.o 00:01:54.631 CC lib/vmd/vmd.o 00:01:54.631 CC lib/json/json_write.o 00:01:54.631 CC lib/vmd/led.o 00:01:54.632 CC lib/env_dpdk/env.o 00:01:54.632 CC lib/rdma_provider/common.o 00:01:54.632 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:54.632 CC lib/env_dpdk/memory.o 00:01:54.632 CC lib/env_dpdk/pci.o 00:01:54.632 CC lib/env_dpdk/threads.o 00:01:54.632 CC lib/env_dpdk/init.o 00:01:54.890 CC lib/env_dpdk/pci_virtio.o 00:01:54.890 CC lib/env_dpdk/pci_ioat.o 00:01:54.890 CC lib/env_dpdk/pci_vmd.o 00:01:54.890 CC lib/env_dpdk/pci_idxd.o 00:01:54.890 CC lib/env_dpdk/pci_event.o 00:01:54.890 CC lib/env_dpdk/sigbus_handler.o 00:01:54.890 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:54.890 CC lib/env_dpdk/pci_dpdk.o 00:01:54.890 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:54.890 LIB libspdk_rdma_provider.a 00:01:54.890 LIB libspdk_conf.a 00:01:54.890 LIB libspdk_rdma_utils.a 00:01:54.890 LIB libspdk_json.a 00:01:55.148 LIB libspdk_idxd.a 00:01:55.148 LIB libspdk_vmd.a 00:01:55.148 CC lib/jsonrpc/jsonrpc_server.o 00:01:55.148 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:55.148 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:55.148 CC lib/jsonrpc/jsonrpc_client.o 00:01:55.406 LIB libspdk_jsonrpc.a 00:01:55.663 LIB libspdk_env_dpdk.a 00:01:55.663 CC lib/rpc/rpc.o 00:01:55.920 LIB libspdk_rpc.a 00:01:56.178 CC lib/notify/notify.o 00:01:56.178 CC lib/notify/notify_rpc.o 00:01:56.178 CC lib/trace/trace_rpc.o 00:01:56.178 CC lib/trace/trace.o 00:01:56.178 CC lib/trace/trace_flags.o 00:01:56.178 CC lib/keyring/keyring_rpc.o 00:01:56.178 CC lib/keyring/keyring.o 00:01:56.435 LIB libspdk_notify.a 00:01:56.435 LIB libspdk_keyring.a 00:01:56.435 LIB libspdk_trace.a 00:01:56.693 CC lib/sock/sock.o 00:01:56.693 CC lib/sock/sock_rpc.o 00:01:56.693 CC lib/thread/thread.o 00:01:56.693 CC lib/thread/iobuf.o 00:01:56.951 LIB libspdk_sock.a 00:01:57.208 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:57.208 CC lib/nvme/nvme_ctrlr.o 00:01:57.208 CC lib/nvme/nvme_fabric.o 00:01:57.208 CC lib/nvme/nvme_ns_cmd.o 00:01:57.208 CC lib/nvme/nvme_pcie.o 00:01:57.208 CC lib/nvme/nvme_ns.o 00:01:57.208 CC lib/nvme/nvme_pcie_common.o 00:01:57.208 CC lib/nvme/nvme_qpair.o 00:01:57.208 CC lib/nvme/nvme.o 00:01:57.208 CC lib/nvme/nvme_quirks.o 00:01:57.208 CC lib/nvme/nvme_transport.o 00:01:57.208 CC lib/nvme/nvme_discovery.o 00:01:57.208 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:57.208 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:57.208 CC lib/nvme/nvme_tcp.o 00:01:57.208 CC lib/nvme/nvme_poll_group.o 00:01:57.208 CC lib/nvme/nvme_opal.o 00:01:57.208 CC lib/nvme/nvme_io_msg.o 00:01:57.208 CC lib/nvme/nvme_zns.o 00:01:57.208 CC lib/nvme/nvme_auth.o 00:01:57.208 CC lib/nvme/nvme_stubs.o 00:01:57.208 CC lib/nvme/nvme_cuse.o 00:01:57.208 CC lib/nvme/nvme_rdma.o 00:01:57.208 CC lib/nvme/nvme_vfio_user.o 00:01:57.466 LIB libspdk_thread.a 00:01:57.723 CC lib/init/subsystem.o 00:01:57.723 CC lib/init/json_config.o 00:01:57.723 CC lib/init/rpc.o 00:01:57.723 CC lib/init/subsystem_rpc.o 00:01:57.723 CC lib/accel/accel.o 00:01:57.723 CC lib/accel/accel_sw.o 00:01:57.723 CC lib/accel/accel_rpc.o 00:01:57.723 CC lib/blob/blobstore.o 00:01:57.723 CC lib/blob/request.o 00:01:57.723 CC lib/blob/zeroes.o 00:01:57.723 CC lib/blob/blob_bs_dev.o 00:01:57.723 CC lib/vfu_tgt/tgt_endpoint.o 00:01:57.723 CC lib/vfu_tgt/tgt_rpc.o 00:01:57.723 CC lib/virtio/virtio_vhost_user.o 00:01:57.723 CC lib/virtio/virtio.o 00:01:57.723 CC lib/virtio/virtio_vfio_user.o 00:01:57.723 CC lib/virtio/virtio_pci.o 00:01:57.980 LIB libspdk_init.a 00:01:57.980 LIB libspdk_virtio.a 00:01:57.980 LIB libspdk_vfu_tgt.a 00:01:58.236 CC lib/event/reactor.o 00:01:58.236 CC lib/event/log_rpc.o 00:01:58.236 CC lib/event/app.o 00:01:58.236 CC lib/event/scheduler_static.o 00:01:58.236 CC lib/event/app_rpc.o 00:01:58.493 LIB libspdk_event.a 00:01:58.493 LIB libspdk_accel.a 00:01:58.751 LIB libspdk_nvme.a 00:01:58.751 CC lib/bdev/bdev_zone.o 00:01:58.751 CC lib/bdev/bdev.o 00:01:58.751 CC lib/bdev/bdev_rpc.o 00:01:58.751 CC lib/bdev/scsi_nvme.o 00:01:58.751 CC lib/bdev/part.o 00:01:59.684 LIB libspdk_blob.a 00:01:59.942 CC lib/lvol/lvol.o 00:01:59.942 CC lib/blobfs/tree.o 00:01:59.942 CC lib/blobfs/blobfs.o 00:02:00.509 LIB libspdk_lvol.a 00:02:00.509 LIB libspdk_blobfs.a 00:02:00.509 LIB libspdk_bdev.a 00:02:00.773 CC lib/nbd/nbd.o 00:02:00.773 CC lib/nbd/nbd_rpc.o 00:02:00.773 CC lib/ftl/ftl_init.o 00:02:00.773 CC lib/ftl/ftl_layout.o 00:02:00.773 CC lib/ftl/ftl_core.o 00:02:00.773 CC lib/ftl/ftl_io.o 00:02:00.773 CC lib/ftl/ftl_debug.o 00:02:00.773 CC lib/ftl/ftl_sb.o 00:02:00.773 CC lib/ftl/ftl_l2p.o 00:02:00.773 CC lib/ftl/ftl_l2p_flat.o 00:02:00.773 CC lib/ftl/ftl_nv_cache.o 00:02:00.773 CC lib/ftl/ftl_band_ops.o 00:02:00.773 CC lib/ftl/ftl_band.o 00:02:00.773 CC lib/ftl/ftl_rq.o 00:02:00.773 CC lib/nvmf/ctrlr.o 00:02:00.773 CC lib/ftl/ftl_writer.o 00:02:00.773 CC lib/ftl/ftl_reloc.o 00:02:00.773 CC lib/nvmf/ctrlr_bdev.o 00:02:00.773 CC lib/nvmf/ctrlr_discovery.o 00:02:00.773 CC lib/nvmf/subsystem.o 00:02:00.773 CC lib/nvmf/nvmf_rpc.o 00:02:00.773 CC lib/ftl/ftl_l2p_cache.o 00:02:00.773 CC lib/ftl/ftl_p2l.o 00:02:00.773 CC lib/nvmf/nvmf.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt.o 00:02:00.773 CC lib/nvmf/transport.o 00:02:00.773 CC lib/nvmf/tcp.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:00.773 CC lib/nvmf/mdns_server.o 00:02:00.773 CC lib/scsi/dev.o 00:02:00.773 CC lib/nvmf/stubs.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:00.773 CC lib/nvmf/rdma.o 00:02:00.773 CC lib/nvmf/auth.o 00:02:00.773 CC lib/scsi/port.o 00:02:00.773 CC lib/nvmf/vfio_user.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:00.773 CC lib/scsi/lun.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:00.773 CC lib/scsi/scsi_bdev.o 00:02:00.773 CC lib/scsi/scsi.o 00:02:00.773 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:02:00.773 CC lib/scsi/scsi_pr.o 00:02:00.773 CC lib/scsi/scsi_rpc.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:00.773 CC lib/scsi/task.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:00.773 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:00.773 CC lib/ftl/utils/ftl_conf.o 00:02:00.773 CC lib/ftl/utils/ftl_md.o 00:02:00.773 CC lib/ublk/ublk.o 00:02:00.773 CC lib/ublk/ublk_rpc.o 00:02:00.773 CC lib/ftl/utils/ftl_mempool.o 00:02:00.773 CC lib/ftl/utils/ftl_bitmap.o 00:02:00.773 CC lib/ftl/utils/ftl_property.o 00:02:00.773 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:00.773 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:00.773 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:00.773 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:00.773 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:00.773 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:00.773 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:00.773 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:00.773 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:00.773 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:00.773 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:00.773 CC lib/ftl/base/ftl_base_dev.o 00:02:00.773 CC lib/ftl/base/ftl_base_bdev.o 00:02:01.031 CC lib/ftl/ftl_trace.o 00:02:01.289 LIB libspdk_nbd.a 00:02:01.289 LIB libspdk_scsi.a 00:02:01.548 LIB libspdk_ublk.a 00:02:01.548 CC lib/vhost/vhost_rpc.o 00:02:01.548 CC lib/vhost/vhost.o 00:02:01.548 CC lib/vhost/vhost_blk.o 00:02:01.548 CC lib/vhost/vhost_scsi.o 00:02:01.548 CC lib/vhost/rte_vhost_user.o 00:02:01.548 CC lib/iscsi/init_grp.o 00:02:01.548 CC lib/iscsi/conn.o 00:02:01.548 CC lib/iscsi/iscsi.o 00:02:01.548 CC lib/iscsi/md5.o 00:02:01.548 CC lib/iscsi/param.o 00:02:01.548 CC lib/iscsi/portal_grp.o 00:02:01.548 CC lib/iscsi/tgt_node.o 00:02:01.549 CC lib/iscsi/iscsi_subsystem.o 00:02:01.549 CC lib/iscsi/iscsi_rpc.o 00:02:01.549 CC lib/iscsi/task.o 00:02:01.549 LIB libspdk_ftl.a 00:02:02.124 LIB libspdk_nvmf.a 00:02:02.383 LIB libspdk_vhost.a 00:02:02.383 LIB libspdk_iscsi.a 00:02:02.949 CC module/env_dpdk/env_dpdk_rpc.o 00:02:02.949 CC module/vfu_device/vfu_virtio_blk.o 00:02:02.949 CC module/vfu_device/vfu_virtio_scsi.o 00:02:02.949 CC module/vfu_device/vfu_virtio.o 00:02:02.949 CC module/vfu_device/vfu_virtio_rpc.o 00:02:02.949 CC module/blob/bdev/blob_bdev.o 00:02:02.949 CC module/keyring/file/keyring_rpc.o 00:02:02.949 CC module/keyring/file/keyring.o 00:02:02.949 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:02.949 CC module/keyring/linux/keyring.o 00:02:02.949 CC module/keyring/linux/keyring_rpc.o 00:02:02.949 CC module/accel/iaa/accel_iaa.o 00:02:02.949 CC module/accel/iaa/accel_iaa_rpc.o 00:02:02.949 CC module/accel/dsa/accel_dsa_rpc.o 00:02:02.949 CC module/accel/dsa/accel_dsa.o 00:02:02.949 LIB libspdk_env_dpdk_rpc.a 00:02:02.949 CC module/accel/error/accel_error.o 00:02:02.949 CC module/accel/error/accel_error_rpc.o 00:02:02.949 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:02.949 CC module/sock/posix/posix.o 00:02:02.949 CC module/accel/ioat/accel_ioat.o 00:02:02.949 CC module/accel/ioat/accel_ioat_rpc.o 00:02:02.949 CC module/scheduler/gscheduler/gscheduler.o 00:02:03.208 LIB libspdk_scheduler_dpdk_governor.a 00:02:03.208 LIB libspdk_keyring_linux.a 00:02:03.208 LIB libspdk_keyring_file.a 00:02:03.208 LIB libspdk_accel_error.a 00:02:03.208 LIB libspdk_scheduler_gscheduler.a 00:02:03.208 LIB libspdk_scheduler_dynamic.a 00:02:03.208 LIB libspdk_accel_iaa.a 00:02:03.208 LIB 
libspdk_blob_bdev.a 00:02:03.208 LIB libspdk_accel_ioat.a 00:02:03.208 LIB libspdk_accel_dsa.a 00:02:03.466 LIB libspdk_vfu_device.a 00:02:03.466 LIB libspdk_sock_posix.a 00:02:03.725 CC module/bdev/delay/vbdev_delay.o 00:02:03.725 CC module/bdev/malloc/bdev_malloc.o 00:02:03.725 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:03.725 CC module/bdev/null/bdev_null.o 00:02:03.725 CC module/bdev/null/bdev_null_rpc.o 00:02:03.725 CC module/bdev/gpt/gpt.o 00:02:03.725 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:03.725 CC module/bdev/gpt/vbdev_gpt.o 00:02:03.725 CC module/bdev/passthru/vbdev_passthru.o 00:02:03.725 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:03.725 CC module/bdev/split/vbdev_split.o 00:02:03.725 CC module/bdev/aio/bdev_aio.o 00:02:03.725 CC module/bdev/aio/bdev_aio_rpc.o 00:02:03.725 CC module/bdev/split/vbdev_split_rpc.o 00:02:03.725 CC module/bdev/error/vbdev_error.o 00:02:03.725 CC module/bdev/error/vbdev_error_rpc.o 00:02:03.725 CC module/bdev/iscsi/bdev_iscsi.o 00:02:03.725 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:03.725 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:03.725 CC module/blobfs/bdev/blobfs_bdev.o 00:02:03.725 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:03.725 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:03.725 CC module/bdev/nvme/bdev_nvme.o 00:02:03.725 CC module/bdev/nvme/nvme_rpc.o 00:02:03.725 CC module/bdev/ftl/bdev_ftl.o 00:02:03.725 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:03.725 CC module/bdev/nvme/bdev_mdns_client.o 00:02:03.725 CC module/bdev/raid/bdev_raid.o 00:02:03.725 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:03.725 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:03.725 CC module/bdev/lvol/vbdev_lvol.o 00:02:03.725 CC module/bdev/nvme/vbdev_opal.o 00:02:03.725 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:03.725 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:03.725 CC module/bdev/raid/bdev_raid_rpc.o 00:02:03.725 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:03.725 CC module/bdev/raid/raid0.o 00:02:03.725 CC module/bdev/raid/raid1.o 00:02:03.725 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:03.725 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:03.725 CC module/bdev/raid/bdev_raid_sb.o 00:02:03.725 CC module/bdev/raid/concat.o 00:02:03.725 LIB libspdk_blobfs_bdev.a 00:02:03.725 LIB libspdk_bdev_null.a 00:02:03.725 LIB libspdk_bdev_split.a 00:02:03.726 LIB libspdk_bdev_error.a 00:02:03.726 LIB libspdk_bdev_passthru.a 00:02:03.984 LIB libspdk_bdev_zone_block.a 00:02:03.984 LIB libspdk_bdev_delay.a 00:02:03.984 LIB libspdk_bdev_iscsi.a 00:02:03.984 LIB libspdk_bdev_malloc.a 00:02:03.984 LIB libspdk_bdev_aio.a 00:02:03.984 LIB libspdk_bdev_gpt.a 00:02:03.984 LIB libspdk_bdev_ftl.a 00:02:03.984 LIB libspdk_bdev_lvol.a 00:02:04.244 LIB libspdk_bdev_virtio.a 00:02:04.244 LIB libspdk_bdev_raid.a 00:02:05.181 LIB libspdk_bdev_nvme.a 00:02:05.749 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:05.749 CC module/event/subsystems/keyring/keyring.o 00:02:05.749 CC module/event/subsystems/vmd/vmd.o 00:02:05.749 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:05.749 CC module/event/subsystems/sock/sock.o 00:02:05.749 CC module/event/subsystems/iobuf/iobuf.o 00:02:05.749 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:05.749 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:05.749 CC module/event/subsystems/scheduler/scheduler.o 00:02:05.749 LIB libspdk_event_keyring.a 00:02:05.749 LIB libspdk_event_vhost_blk.a 00:02:05.749 LIB libspdk_event_vmd.a 00:02:05.749 LIB libspdk_event_vfu_tgt.a 00:02:05.749 LIB libspdk_event_scheduler.a 
00:02:05.749 LIB libspdk_event_sock.a 00:02:05.749 LIB libspdk_event_iobuf.a 00:02:06.008 CC module/event/subsystems/accel/accel.o 00:02:06.008 LIB libspdk_event_accel.a 00:02:06.576 CC module/event/subsystems/bdev/bdev.o 00:02:06.576 LIB libspdk_event_bdev.a 00:02:06.834 CC module/event/subsystems/scsi/scsi.o 00:02:06.834 CC module/event/subsystems/nbd/nbd.o 00:02:06.834 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:06.834 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:06.834 CC module/event/subsystems/ublk/ublk.o 00:02:06.834 LIB libspdk_event_scsi.a 00:02:06.834 LIB libspdk_event_nbd.a 00:02:07.093 LIB libspdk_event_ublk.a 00:02:07.093 LIB libspdk_event_nvmf.a 00:02:07.093 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:07.351 CC module/event/subsystems/iscsi/iscsi.o 00:02:07.351 LIB libspdk_event_vhost_scsi.a 00:02:07.351 LIB libspdk_event_iscsi.a 00:02:07.609 CC app/spdk_nvme_perf/perf.o 00:02:07.609 TEST_HEADER include/spdk/accel.h 00:02:07.609 TEST_HEADER include/spdk/accel_module.h 00:02:07.609 TEST_HEADER include/spdk/assert.h 00:02:07.609 TEST_HEADER include/spdk/base64.h 00:02:07.609 TEST_HEADER include/spdk/barrier.h 00:02:07.609 TEST_HEADER include/spdk/bdev.h 00:02:07.609 TEST_HEADER include/spdk/bdev_module.h 00:02:07.609 CC app/trace_record/trace_record.o 00:02:07.609 TEST_HEADER include/spdk/bit_array.h 00:02:07.609 CXX app/trace/trace.o 00:02:07.609 TEST_HEADER include/spdk/bit_pool.h 00:02:07.609 TEST_HEADER include/spdk/blob_bdev.h 00:02:07.609 TEST_HEADER include/spdk/bdev_zone.h 00:02:07.609 TEST_HEADER include/spdk/blobfs.h 00:02:07.609 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:07.609 TEST_HEADER include/spdk/conf.h 00:02:07.609 TEST_HEADER include/spdk/blob.h 00:02:07.609 TEST_HEADER include/spdk/config.h 00:02:07.609 TEST_HEADER include/spdk/cpuset.h 00:02:07.609 TEST_HEADER include/spdk/crc32.h 00:02:07.609 TEST_HEADER include/spdk/crc64.h 00:02:07.609 CC app/spdk_top/spdk_top.o 00:02:07.609 TEST_HEADER include/spdk/crc16.h 00:02:07.609 TEST_HEADER include/spdk/dif.h 00:02:07.609 TEST_HEADER include/spdk/dma.h 00:02:07.609 TEST_HEADER include/spdk/env_dpdk.h 00:02:07.609 TEST_HEADER include/spdk/endian.h 00:02:07.609 TEST_HEADER include/spdk/event.h 00:02:07.609 TEST_HEADER include/spdk/env.h 00:02:07.609 TEST_HEADER include/spdk/fd_group.h 00:02:07.609 TEST_HEADER include/spdk/fd.h 00:02:07.609 TEST_HEADER include/spdk/file.h 00:02:07.609 CC app/spdk_lspci/spdk_lspci.o 00:02:07.609 TEST_HEADER include/spdk/ftl.h 00:02:07.609 TEST_HEADER include/spdk/gpt_spec.h 00:02:07.609 TEST_HEADER include/spdk/hexlify.h 00:02:07.609 TEST_HEADER include/spdk/histogram_data.h 00:02:07.609 TEST_HEADER include/spdk/idxd.h 00:02:07.609 TEST_HEADER include/spdk/idxd_spec.h 00:02:07.609 TEST_HEADER include/spdk/ioat.h 00:02:07.609 TEST_HEADER include/spdk/ioat_spec.h 00:02:07.609 TEST_HEADER include/spdk/init.h 00:02:07.609 TEST_HEADER include/spdk/iscsi_spec.h 00:02:07.609 TEST_HEADER include/spdk/json.h 00:02:07.609 CC test/rpc_client/rpc_client_test.o 00:02:07.609 TEST_HEADER include/spdk/keyring.h 00:02:07.609 TEST_HEADER include/spdk/jsonrpc.h 00:02:07.609 TEST_HEADER include/spdk/keyring_module.h 00:02:07.609 TEST_HEADER include/spdk/likely.h 00:02:07.609 TEST_HEADER include/spdk/log.h 00:02:07.609 CC app/spdk_nvme_identify/identify.o 00:02:07.609 CC app/spdk_nvme_discover/discovery_aer.o 00:02:07.609 TEST_HEADER include/spdk/lvol.h 00:02:07.609 TEST_HEADER include/spdk/memory.h 00:02:07.609 TEST_HEADER include/spdk/mmio.h 00:02:07.609 TEST_HEADER 
include/spdk/nbd.h 00:02:07.609 TEST_HEADER include/spdk/notify.h 00:02:07.609 TEST_HEADER include/spdk/nvme.h 00:02:07.609 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:07.609 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.609 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.609 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.609 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.609 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.609 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.609 TEST_HEADER include/spdk/nvmf.h 00:02:07.609 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.609 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.609 TEST_HEADER include/spdk/opal.h 00:02:07.609 TEST_HEADER include/spdk/pci_ids.h 00:02:07.609 TEST_HEADER include/spdk/opal_spec.h 00:02:07.609 TEST_HEADER include/spdk/pipe.h 00:02:07.609 TEST_HEADER include/spdk/queue.h 00:02:07.609 TEST_HEADER include/spdk/rpc.h 00:02:07.609 TEST_HEADER include/spdk/reduce.h 00:02:07.609 TEST_HEADER include/spdk/scheduler.h 00:02:07.609 TEST_HEADER include/spdk/scsi.h 00:02:07.610 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.610 TEST_HEADER include/spdk/sock.h 00:02:07.610 TEST_HEADER include/spdk/stdinc.h 00:02:07.610 TEST_HEADER include/spdk/string.h 00:02:07.610 TEST_HEADER include/spdk/thread.h 00:02:07.610 TEST_HEADER include/spdk/trace.h 00:02:07.610 TEST_HEADER include/spdk/trace_parser.h 00:02:07.610 CC app/nvmf_tgt/nvmf_main.o 00:02:07.610 TEST_HEADER include/spdk/tree.h 00:02:07.610 TEST_HEADER include/spdk/ublk.h 00:02:07.610 TEST_HEADER include/spdk/util.h 00:02:07.610 TEST_HEADER include/spdk/uuid.h 00:02:07.610 TEST_HEADER include/spdk/version.h 00:02:07.610 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.610 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.610 TEST_HEADER include/spdk/vhost.h 00:02:07.610 TEST_HEADER include/spdk/vmd.h 00:02:07.610 TEST_HEADER include/spdk/xor.h 00:02:07.610 TEST_HEADER include/spdk/zipf.h 00:02:07.610 CXX test/cpp_headers/accel.o 00:02:07.610 CXX test/cpp_headers/accel_module.o 00:02:07.610 CC app/iscsi_tgt/iscsi_tgt.o 00:02:07.610 CXX test/cpp_headers/assert.o 00:02:07.610 CC app/spdk_dd/spdk_dd.o 00:02:07.610 CXX test/cpp_headers/barrier.o 00:02:07.870 CXX test/cpp_headers/base64.o 00:02:07.870 CXX test/cpp_headers/bdev.o 00:02:07.870 CXX test/cpp_headers/bdev_module.o 00:02:07.870 CXX test/cpp_headers/bdev_zone.o 00:02:07.870 CXX test/cpp_headers/bit_pool.o 00:02:07.870 CXX test/cpp_headers/bit_array.o 00:02:07.870 CXX test/cpp_headers/blob_bdev.o 00:02:07.870 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.870 CXX test/cpp_headers/blobfs.o 00:02:07.870 CXX test/cpp_headers/blob.o 00:02:07.870 CXX test/cpp_headers/conf.o 00:02:07.870 CXX test/cpp_headers/config.o 00:02:07.870 CXX test/cpp_headers/cpuset.o 00:02:07.870 CXX test/cpp_headers/crc16.o 00:02:07.870 CXX test/cpp_headers/crc32.o 00:02:07.870 CXX test/cpp_headers/crc64.o 00:02:07.870 CXX test/cpp_headers/dif.o 00:02:07.870 CXX test/cpp_headers/dma.o 00:02:07.870 CXX test/cpp_headers/endian.o 00:02:07.870 CXX test/cpp_headers/env_dpdk.o 00:02:07.870 CXX test/cpp_headers/env.o 00:02:07.870 CXX test/cpp_headers/event.o 00:02:07.870 CXX test/cpp_headers/fd_group.o 00:02:07.870 CXX test/cpp_headers/fd.o 00:02:07.870 CXX test/cpp_headers/file.o 00:02:07.870 CXX test/cpp_headers/ftl.o 00:02:07.870 CXX test/cpp_headers/gpt_spec.o 00:02:07.870 CXX test/cpp_headers/hexlify.o 00:02:07.870 CXX test/cpp_headers/histogram_data.o 00:02:07.870 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:07.870 CXX test/cpp_headers/idxd.o 00:02:07.870 
CXX test/cpp_headers/idxd_spec.o 00:02:07.870 CXX test/cpp_headers/init.o 00:02:07.870 CXX test/cpp_headers/ioat.o 00:02:07.870 CXX test/cpp_headers/ioat_spec.o 00:02:07.870 CXX test/cpp_headers/iscsi_spec.o 00:02:07.870 CXX test/cpp_headers/json.o 00:02:07.870 CXX test/cpp_headers/jsonrpc.o 00:02:07.870 CC test/env/memory/memory_ut.o 00:02:07.870 CC app/spdk_tgt/spdk_tgt.o 00:02:07.870 CC test/env/vtophys/vtophys.o 00:02:07.870 CC test/env/pci/pci_ut.o 00:02:07.870 CC test/app/histogram_perf/histogram_perf.o 00:02:07.870 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.870 CC examples/ioat/perf/perf.o 00:02:07.870 CC test/app/stub/stub.o 00:02:07.870 CC test/app/jsoncat/jsoncat.o 00:02:07.870 CC examples/ioat/verify/verify.o 00:02:07.870 CC test/thread/lock/spdk_lock.o 00:02:07.870 CXX test/cpp_headers/keyring.o 00:02:07.870 CC app/fio/nvme/fio_plugin.o 00:02:07.870 CC test/thread/poller_perf/poller_perf.o 00:02:07.870 CC examples/util/zipf/zipf.o 00:02:07.870 CC test/dma/test_dma/test_dma.o 00:02:07.870 CC test/app/bdev_svc/bdev_svc.o 00:02:07.870 CC app/fio/bdev/fio_plugin.o 00:02:07.870 LINK spdk_lspci 00:02:07.870 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.870 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.871 LINK rpc_client_test 00:02:07.871 CXX test/cpp_headers/keyring_module.o 00:02:07.871 CXX test/cpp_headers/likely.o 00:02:07.871 CXX test/cpp_headers/log.o 00:02:07.871 CXX test/cpp_headers/lvol.o 00:02:07.871 LINK spdk_trace_record 00:02:07.871 CXX test/cpp_headers/memory.o 00:02:07.871 LINK vtophys 00:02:07.871 CXX test/cpp_headers/mmio.o 00:02:07.871 CXX test/cpp_headers/nbd.o 00:02:07.871 CXX test/cpp_headers/notify.o 00:02:07.871 CXX test/cpp_headers/nvme.o 00:02:07.871 CXX test/cpp_headers/nvme_intel.o 00:02:07.871 LINK spdk_nvme_discover 00:02:07.871 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.871 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.871 LINK histogram_perf 00:02:07.871 CXX test/cpp_headers/nvme_spec.o 00:02:07.871 CXX test/cpp_headers/nvme_zns.o 00:02:07.871 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.871 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.871 CXX test/cpp_headers/nvmf.o 00:02:07.871 LINK jsoncat 00:02:07.871 CXX test/cpp_headers/nvmf_spec.o 00:02:07.871 CXX test/cpp_headers/nvmf_transport.o 00:02:07.871 CXX test/cpp_headers/opal.o 00:02:07.871 CXX test/cpp_headers/opal_spec.o 00:02:07.871 CXX test/cpp_headers/pci_ids.o 00:02:07.871 CXX test/cpp_headers/pipe.o 00:02:07.871 LINK nvmf_tgt 00:02:07.871 CXX test/cpp_headers/queue.o 00:02:07.871 CXX test/cpp_headers/reduce.o 00:02:08.131 CXX test/cpp_headers/rpc.o 00:02:08.131 CXX test/cpp_headers/scheduler.o 00:02:08.131 CXX test/cpp_headers/scsi.o 00:02:08.131 CXX test/cpp_headers/scsi_spec.o 00:02:08.131 CXX test/cpp_headers/sock.o 00:02:08.131 CXX test/cpp_headers/stdinc.o 00:02:08.131 CXX test/cpp_headers/string.o 00:02:08.131 LINK env_dpdk_post_init 00:02:08.131 CXX test/cpp_headers/thread.o 00:02:08.131 CXX test/cpp_headers/trace.o 00:02:08.131 LINK poller_perf 00:02:08.131 LINK zipf 00:02:08.131 CXX test/cpp_headers/trace_parser.o 00:02:08.131 LINK interrupt_tgt 00:02:08.131 LINK iscsi_tgt 00:02:08.131 CXX test/cpp_headers/tree.o 00:02:08.131 CXX test/cpp_headers/ublk.o 00:02:08.131 CXX test/cpp_headers/util.o 00:02:08.131 LINK stub 00:02:08.131 CXX test/cpp_headers/uuid.o 00:02:08.131 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 
00:02:08.131 struct spdk_nvme_fdp_ruhs ruhs; 00:02:08.131 ^ 00:02:08.131 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:08.131 LINK ioat_perf 00:02:08.131 LINK verify 00:02:08.131 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.131 LINK spdk_tgt 00:02:08.131 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.131 LINK bdev_svc 00:02:08.131 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:08.131 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:08.131 CXX test/cpp_headers/version.o 00:02:08.131 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.131 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.131 CXX test/cpp_headers/vhost.o 00:02:08.131 CXX test/cpp_headers/vmd.o 00:02:08.131 LINK spdk_trace 00:02:08.131 CXX test/cpp_headers/xor.o 00:02:08.131 CXX test/cpp_headers/zipf.o 00:02:08.389 LINK test_dma 00:02:08.389 LINK pci_ut 00:02:08.389 LINK spdk_dd 00:02:08.389 1 warning generated. 00:02:08.389 LINK nvme_fuzz 00:02:08.389 LINK spdk_nvme 00:02:08.389 LINK mem_callbacks 00:02:08.389 LINK spdk_bdev 00:02:08.647 LINK spdk_nvme_perf 00:02:08.648 LINK llvm_vfio_fuzz 00:02:08.648 LINK spdk_nvme_identify 00:02:08.648 LINK vhost_fuzz 00:02:08.648 LINK spdk_top 00:02:08.648 CC examples/sock/hello_world/hello_sock.o 00:02:08.648 CC examples/idxd/perf/perf.o 00:02:08.648 CC examples/vmd/led/led.o 00:02:08.648 CC app/vhost/vhost.o 00:02:08.648 CC examples/vmd/lsvmd/lsvmd.o 00:02:08.648 CC examples/thread/thread/thread_ex.o 00:02:08.905 LINK llvm_nvme_fuzz 00:02:08.905 LINK memory_ut 00:02:08.905 LINK led 00:02:08.905 LINK lsvmd 00:02:08.905 LINK hello_sock 00:02:08.905 LINK vhost 00:02:08.905 LINK idxd_perf 00:02:08.905 LINK thread 00:02:08.905 LINK spdk_lock 00:02:09.473 LINK iscsi_fuzz 00:02:09.731 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:09.731 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:09.731 CC examples/nvme/arbitration/arbitration.o 00:02:09.731 CC examples/nvme/hotplug/hotplug.o 00:02:09.731 CC examples/nvme/reconnect/reconnect.o 00:02:09.731 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:09.731 CC examples/nvme/abort/abort.o 00:02:09.731 CC examples/nvme/hello_world/hello_world.o 00:02:09.731 CC test/event/reactor_perf/reactor_perf.o 00:02:09.731 CC test/event/event_perf/event_perf.o 00:02:09.731 CC test/event/reactor/reactor.o 00:02:09.731 CC test/event/app_repeat/app_repeat.o 00:02:09.731 LINK pmr_persistence 00:02:09.731 LINK cmb_copy 00:02:09.731 CC test/event/scheduler/scheduler.o 00:02:09.731 LINK hotplug 00:02:09.731 LINK hello_world 00:02:09.990 LINK reactor_perf 00:02:09.990 LINK event_perf 00:02:09.990 LINK reactor 00:02:09.990 LINK reconnect 00:02:09.990 LINK abort 00:02:09.990 LINK arbitration 00:02:09.990 LINK app_repeat 00:02:09.990 LINK nvme_manage 00:02:09.990 LINK scheduler 00:02:10.248 CC test/nvme/fused_ordering/fused_ordering.o 00:02:10.248 CC test/nvme/compliance/nvme_compliance.o 00:02:10.248 CC test/nvme/sgl/sgl.o 00:02:10.248 CC test/nvme/e2edp/nvme_dp.o 00:02:10.248 CC test/nvme/startup/startup.o 00:02:10.248 CC test/nvme/overhead/overhead.o 00:02:10.248 CC test/nvme/err_injection/err_injection.o 00:02:10.248 CC test/nvme/aer/aer.o 00:02:10.248 CC test/nvme/connect_stress/connect_stress.o 00:02:10.248 CC test/nvme/boot_partition/boot_partition.o 00:02:10.248 CC test/nvme/simple_copy/simple_copy.o 00:02:10.248 CC test/nvme/reset/reset.o 00:02:10.248 CC test/accel/dif/dif.o 00:02:10.248 CC test/nvme/reserve/reserve.o 00:02:10.248 CC test/nvme/fdp/fdp.o 00:02:10.248 CC test/nvme/cuse/cuse.o 00:02:10.248 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:10.248 CC test/blobfs/mkfs/mkfs.o 00:02:10.248 CC test/lvol/esnap/esnap.o 00:02:10.505 LINK startup 00:02:10.505 LINK boot_partition 00:02:10.505 LINK fused_ordering 00:02:10.505 LINK err_injection 00:02:10.505 LINK reserve 00:02:10.505 LINK connect_stress 00:02:10.505 LINK simple_copy 00:02:10.505 LINK aer 00:02:10.505 LINK nvme_dp 00:02:10.505 LINK doorbell_aers 00:02:10.505 LINK sgl 00:02:10.505 LINK mkfs 00:02:10.505 LINK reset 00:02:10.505 LINK overhead 00:02:10.505 LINK fdp 00:02:10.505 LINK nvme_compliance 00:02:10.764 LINK dif 00:02:10.764 CC examples/blob/cli/blobcli.o 00:02:11.041 CC examples/blob/hello_world/hello_blob.o 00:02:11.041 CC examples/accel/perf/accel_perf.o 00:02:11.041 LINK hello_blob 00:02:11.328 LINK cuse 00:02:11.328 LINK blobcli 00:02:11.328 LINK accel_perf 00:02:11.953 CC examples/bdev/hello_world/hello_bdev.o 00:02:11.953 CC examples/bdev/bdevperf/bdevperf.o 00:02:12.211 LINK hello_bdev 00:02:12.468 CC test/bdev/bdevio/bdevio.o 00:02:12.468 LINK bdevperf 00:02:12.725 LINK bdevio 00:02:14.098 LINK esnap 00:02:14.098 CC examples/nvmf/nvmf/nvmf.o 00:02:14.098 LINK nvmf 00:02:15.474 00:02:15.474 real 0m46.303s 00:02:15.474 user 6m9.125s 00:02:15.474 sys 2m21.007s 00:02:15.474 16:13:01 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:15.474 16:13:01 make -- common/autotest_common.sh@10 -- $ set +x 00:02:15.474 ************************************ 00:02:15.474 END TEST make 00:02:15.474 ************************************ 00:02:15.734 16:13:01 -- common/autotest_common.sh@1142 -- $ return 0 00:02:15.734 16:13:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:15.734 16:13:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:15.734 16:13:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:15.734 16:13:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.734 16:13:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:15.734 16:13:01 -- pm/common@44 -- $ pid=1391169 00:02:15.734 16:13:01 -- pm/common@50 -- $ kill -TERM 1391169 00:02:15.734 16:13:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.734 16:13:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:15.734 16:13:01 -- pm/common@44 -- $ pid=1391171 00:02:15.734 16:13:01 -- pm/common@50 -- $ kill -TERM 1391171 00:02:15.734 16:13:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.734 16:13:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:15.734 16:13:01 -- pm/common@44 -- $ pid=1391173 00:02:15.734 16:13:01 -- pm/common@50 -- $ kill -TERM 1391173 00:02:15.734 16:13:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.734 16:13:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:15.734 16:13:01 -- pm/common@44 -- $ pid=1391198 00:02:15.734 16:13:01 -- pm/common@50 -- $ sudo -E kill -TERM 1391198 00:02:15.734 16:13:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:15.734 16:13:01 -- nvmf/common.sh@7 -- # uname -s 00:02:15.734 16:13:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:15.734 16:13:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:15.734 16:13:01 -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:15.734 16:13:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:15.734 16:13:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:15.734 16:13:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:15.734 16:13:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:15.734 16:13:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:15.734 16:13:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:15.734 16:13:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:15.734 16:13:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:02:15.734 16:13:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:02:15.734 16:13:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:15.734 16:13:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:15.734 16:13:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:15.734 16:13:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:15.734 16:13:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:15.734 16:13:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:15.734 16:13:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.734 16:13:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.734 16:13:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.734 16:13:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.734 16:13:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.734 16:13:01 -- paths/export.sh@5 -- # export PATH 00:02:15.734 16:13:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.734 16:13:01 -- nvmf/common.sh@47 -- # : 0 00:02:15.734 16:13:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:15.734 16:13:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:15.734 16:13:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:15.734 16:13:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:15.734 16:13:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:15.734 16:13:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:15.734 16:13:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:15.734 16:13:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:15.734 16:13:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:15.734 16:13:01 -- 
spdk/autotest.sh@32 -- # uname -s 00:02:15.734 16:13:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:15.734 16:13:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:15.734 16:13:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:15.734 16:13:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:15.734 16:13:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:15.734 16:13:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:15.734 16:13:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:15.734 16:13:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:15.734 16:13:01 -- spdk/autotest.sh@48 -- # udevadm_pid=1449619 00:02:15.734 16:13:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:15.734 16:13:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:15.734 16:13:01 -- pm/common@17 -- # local monitor 00:02:15.734 16:13:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.734 16:13:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.734 16:13:01 -- pm/common@21 -- # date +%s 00:02:15.734 16:13:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.734 16:13:01 -- pm/common@21 -- # date +%s 00:02:15.734 16:13:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.734 16:13:01 -- pm/common@21 -- # date +%s 00:02:15.734 16:13:01 -- pm/common@25 -- # sleep 1 00:02:15.734 16:13:01 -- pm/common@21 -- # date +%s 00:02:15.734 16:13:01 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721052781 00:02:15.734 16:13:01 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721052781 00:02:15.734 16:13:01 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721052781 00:02:15.734 16:13:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721052781 00:02:15.734 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721052781_collect-vmstat.pm.log 00:02:15.734 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721052781_collect-cpu-load.pm.log 00:02:15.734 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721052781_collect-cpu-temp.pm.log 00:02:15.993 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721052781_collect-bmc-pm.bmc.pm.log 00:02:16.930 16:13:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:16.930 16:13:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:16.930 16:13:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:16.930 16:13:02 -- 
common/autotest_common.sh@10 -- # set +x 00:02:16.930 16:13:02 -- spdk/autotest.sh@59 -- # create_test_list 00:02:16.930 16:13:02 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:16.930 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:02:16.930 16:13:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:16.930 16:13:02 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:16.930 16:13:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:16.930 16:13:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:16.930 16:13:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:16.930 16:13:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:16.930 16:13:02 -- common/autotest_common.sh@1455 -- # uname 00:02:16.930 16:13:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:16.930 16:13:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:16.930 16:13:02 -- common/autotest_common.sh@1475 -- # uname 00:02:16.930 16:13:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:16.930 16:13:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:16.930 16:13:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:16.930 16:13:02 -- spdk/autotest.sh@72 -- # hash lcov 00:02:16.930 16:13:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:16.930 16:13:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:16.930 16:13:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:16.930 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:02:16.930 16:13:02 -- spdk/autotest.sh@91 -- # rm -f 00:02:16.930 16:13:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:20.216 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:02:20.216 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:20.216 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:20.474 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:20.733 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:20.733 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:20.733 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:20.733 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:20.733 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:22.633 16:13:08 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:22.633 16:13:08 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:22.633 16:13:08 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:22.633 16:13:08 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:22.633 16:13:08 -- common/autotest_common.sh@1672 -- # for nvme in 
/sys/block/nvme* 00:02:22.633 16:13:08 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:22.633 16:13:08 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:22.633 16:13:08 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.633 16:13:08 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:22.633 16:13:08 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:22.633 16:13:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:22.633 16:13:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:22.633 16:13:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:22.633 16:13:08 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:22.633 16:13:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:22.633 No valid GPT data, bailing 00:02:22.633 16:13:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:22.633 16:13:08 -- scripts/common.sh@391 -- # pt= 00:02:22.633 16:13:08 -- scripts/common.sh@392 -- # return 1 00:02:22.633 16:13:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:22.633 1+0 records in 00:02:22.633 1+0 records out 00:02:22.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679586 s, 154 MB/s 00:02:22.633 16:13:08 -- spdk/autotest.sh@118 -- # sync 00:02:22.633 16:13:08 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:22.633 16:13:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:22.633 16:13:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:27.904 16:13:13 -- spdk/autotest.sh@124 -- # uname -s 00:02:27.904 16:13:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:27.904 16:13:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:27.904 16:13:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:27.904 16:13:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:27.904 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:02:27.904 ************************************ 00:02:27.904 START TEST setup.sh 00:02:27.904 ************************************ 00:02:27.904 16:13:13 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:27.904 * Looking for test storage... 00:02:27.904 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:27.904 16:13:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:27.904 16:13:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:27.904 16:13:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:27.904 16:13:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:27.904 16:13:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:27.904 16:13:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:27.904 ************************************ 00:02:27.904 START TEST acl 00:02:27.904 ************************************ 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:27.904 * Looking for test storage... 
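The pre-clean traced above is a three-step filter: skip zoned namespaces, probe for a partition table, and zero the first MiB only when the disk is unclaimed. A condensed sketch of that logic; blkid stands in for the spdk-gpt.py probe the script actually calls, and error handling is trimmed.

    # Hedged sketch of the disk pre-clean traced above.
    shopt -s extglob
    for nvme in /dev/nvme*n!(*p*); do                    # namespaces, not partitions
      [[ -b $nvme ]] || continue
      dev=${nvme##*/}
      [[ $(< "/sys/block/$dev/queue/zoned") != none ]] && continue   # is_block_zoned
      if [[ -z $(blkid -s PTTYPE -o value "$nvme") ]]; then          # "No valid GPT data, bailing"
        dd if=/dev/zero of="$nvme" bs=1M count=1                     # wipe stale metadata
      fi
    done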
00:02:27.904 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:27.904 16:13:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:27.904 16:13:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:27.904 16:13:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:27.904 16:13:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:27.904 16:13:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:27.904 16:13:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:27.904 16:13:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:27.904 16:13:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:27.904 16:13:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:34.463 16:13:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:34.463 16:13:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:34.463 16:13:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.463 16:13:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:34.463 16:13:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.463 16:13:19 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:36.996 Hugepages 00:02:36.996 node hugesize free / total 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 00:02:36.996 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.996 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.255 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:37.256 16:13:22 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:37.256 16:13:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:37.256 16:13:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.256 16:13:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:37.256 ************************************ 00:02:37.256 START TEST denied 00:02:37.256 ************************************ 00:02:37.256 16:13:22 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:37.256 16:13:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:02:37.256 16:13:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:37.256 16:13:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:02:37.256 16:13:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.256 16:13:22 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:42.523 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:02:42.523 16:13:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:02:42.523 16:13:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:42.523 16:13:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:42.524 16:13:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:02:42.524 16:13:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:02:42.524 16:13:28 
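The classification above is one read loop over the status output: each line splits into BDF and driver fields, ioatdma entries fall through via continue, and every unblocked nvme controller is collected into devs. A sketch, restructured slightly from the traced for/continue form:

    # Hedged sketch of the acl.sh@18-22 classification loop traced above.
    declare -a devs; declare -A drivers
    while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]]          || continue   # skip hugepage/header lines
      [[ $driver == nvme ]]          || continue   # ioatdma channels are not test targets
      [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # honor the denied list
      devs+=("$dev"); drivers["$dev"]=$driver
    done < <(setup output status)                  # 'setup' is the suite's helper function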
setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:42.524 16:13:28 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:42.524 16:13:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:42.524 16:13:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:42.524 16:13:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.645 00:02:50.645 real 0m12.182s 00:02:50.645 user 0m3.864s 00:02:50.645 sys 0m7.553s 00:02:50.645 16:13:34 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:50.645 16:13:34 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:50.645 ************************************ 00:02:50.645 END TEST denied 00:02:50.645 ************************************ 00:02:50.645 16:13:34 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:50.645 16:13:34 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:50.645 16:13:34 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.645 16:13:34 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.645 16:13:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:50.645 ************************************ 00:02:50.645 START TEST allowed 00:02:50.645 ************************************ 00:02:50.645 16:13:34 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:50.645 16:13:34 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:02:50.645 16:13:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:02:50.645 16:13:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:50.645 16:13:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.645 16:13:34 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:58.767 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:02:58.767 16:13:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:58.767 16:13:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:58.767 16:13:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:58.767 16:13:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.767 16:13:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.039 00:03:04.039 real 0m13.998s 00:03:04.039 user 0m3.393s 00:03:04.039 sys 0m7.376s 00:03:04.039 16:13:48 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.039 16:13:48 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:04.039 ************************************ 00:03:04.039 END TEST allowed 00:03:04.039 ************************************ 00:03:04.039 16:13:48 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:04.039 00:03:04.039 real 0m35.622s 00:03:04.039 user 0m10.605s 00:03:04.039 sys 0m21.211s 00:03:04.039 16:13:48 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.039 16:13:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:04.039 ************************************ 00:03:04.039 END TEST acl 00:03:04.039 ************************************ 00:03:04.039 16:13:48 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:04.039 
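Both test results above come from the same knob: scripts/setup.sh reads PCI_BLOCKED and PCI_ALLOWED from the environment, so a single controller can be fenced off or pinned per run without editing any config. Typical invocations, paths relative to the spdk checkout:

    sudo PCI_BLOCKED="0000:1a:00.0" ./scripts/setup.sh config   # "Skipping denied controller at 0000:1a:00.0"
    sudo PCI_ALLOWED="0000:1a:00.0" ./scripts/setup.sh config   # rebinds only this controller: nvme -> vfio-pci
    sudo ./scripts/setup.sh reset                               # hand devices back to the kernel drivers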
16:13:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.039 16:13:48 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.039 16:13:48 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.039 16:13:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.039 ************************************ 00:03:04.039 START TEST hugepages 00:03:04.039 ************************************ 00:03:04.039 16:13:49 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.039 * Looking for test storage... 00:03:04.039 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 73269564 kB' 'MemAvailable: 76747720 kB' 'Buffers: 4360 kB' 'Cached: 11429376 kB' 'SwapCached: 0 kB' 'Active: 8529096 kB' 'Inactive: 3529752 kB' 'Active(anon): 8026348 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628332 kB' 'Mapped: 185648 kB' 'Shmem: 7401236 kB' 'KReclaimable: 198128 kB' 'Slab: 557928 kB' 'SReclaimable: 198128 kB' 'SUnreclaim: 359800 kB' 'KernelStack: 16336 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438216 kB' 'Committed_AS: 9455404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212104 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.039 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.039 16:13:49 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.039
[trace condensed: setup/common.sh@31-32 repeat "IFS=': '; read -r var val _" for every remaining /proc/meminfo field, Active(file) through HugePages_Free, and continue past each one since none matches Hugepagesize; the field values appear in full in the printf block above]
16:13:49 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.040 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
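The scan that just returned 2048 is get_meminfo: snapshot the meminfo source, strip any per-node prefix, and echo the value of the first key that matches. A compacted sketch of the helper as traced; the loop form is restructured slightly, and the per-node path is an assumption based on the setup/common.sh@23 probe.

    # Hedged sketch of get_meminfo as traced above.
    shopt -s extglob
    get_meminfo() {                      # usage: get_meminfo Hugepagesize [node]
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo   # assumed per-node source
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line, as at common.sh@29
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
    }
    get_meminfo Hugepagesize             # -> 2048 on this machine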
echo 0 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:04.041 16:13:49 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:04.041 16:13:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.041 16:13:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.041 16:13:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:04.041 ************************************ 00:03:04.041 START TEST default_setup 00:03:04.041 ************************************ 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.041 16:13:49 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:07.318 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:00:04.5 (8086 2021): ioatdma -> 
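The numbers in the get_test_nr_hugepages trace above are straight division: default_setup asks for 2097152 kB on node 0, the default page size just measured is 2048 kB, and 2097152 / 2048 = 1024, hence nr_hugepages=1024. The trace shows the guard and the result; the division below is the implied computation, written out with names matching the trace.

    size=2097152                                   # kB requested by default_setup
    default_hugepages=2048                         # kB, from get_meminfo Hugepagesize
    (( size >= default_hugepages )) || exit 1      # guard traced at hugepages.sh@55
    nr_hugepages=$(( size / default_hugepages ))   # 1024, as seen at hugepages.sh@57
    echo "$nr_hugepages"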
vfio-pci 00:03:07.318 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:07.318 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.601 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75481232 kB' 'MemAvailable: 78959324 kB' 'Buffers: 4360 kB' 'Cached: 11429528 kB' 'SwapCached: 0 kB' 'Active: 8545424 kB' 'Inactive: 3529752 kB' 'Active(anon): 8042676 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644596 kB' 'Mapped: 185788 kB' 'Shmem: 7401388 kB' 'KReclaimable: 198000 kB' 'Slab: 556324 kB' 'SReclaimable: 198000 kB' 
'SUnreclaim: 358324 kB' 'KernelStack: 16432 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9472344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212136 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.509 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:12.510
[trace condensed: the same per-field scan repeats for get_meminfo AnonHugePages; setup/common.sh@32 tests each /proc/meminfo key (Inactive through WritebackTmp here) against AnonHugePages and continues past every non-matching field; the field values appear in full in the printf block above]
16:13:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.510 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- 
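The block above is setup/common.sh's get_meminfo scanning a /proc/meminfo snapshot key by key until the requested field (here AnonHugePages) matches, then echoing its value back to hugepages.sh. A minimal, self-contained bash sketch of that scan pattern follows; the name get_meminfo_sketch and the argument handling are illustrative assumptions, not the exact SPDK helper:

#!/usr/bin/env bash
shopt -s extglob  # the "Node N " prefix strip below uses an extended glob
# Hypothetical sketch of the pattern traced above: snapshot a meminfo file,
# drop any per-node "Node N " prefixes, then walk "key: value" pairs until
# the requested key matches and print its value.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node snapshots live under /sys/devices/system/node/nodeN/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip "Node 0 " style prefixes
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_sketch AnonHugePages  # prints 0 on the box traced above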
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75481088 kB' 'MemAvailable: 78959180 kB' 'Buffers: 4360 kB' 'Cached: 11429532 kB' 'SwapCached: 0 kB' 'Active: 8545116 kB' 'Inactive: 3529752 kB' 'Active(anon): 8042368 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644376 kB' 'Mapped: 185764 kB' 'Shmem: 7401392 kB' 'KReclaimable: 198000 kB' 'Slab: 556304 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358304 kB' 'KernelStack: 16448 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9472360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212136 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:12.511 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HugePages_Rsvd each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and continue; HugePages_Surp matches]
00:03:12.512 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:12.512 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.512 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:12.512 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:12.512 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:12.512 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:12.512 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.512 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
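The common.sh@22-25 steps above choose the meminfo source: no node argument was given, so the candidate per-node path degenerates to /sys/devices/system/node/node/meminfo (real entries are node0, node1, ...), the existence test fails, and the helper keeps the global /proc/meminfo. A small illustrative sketch of that fallback, with assumed variable names:

# Sketch of the source selection traced at common.sh@22-25 above:
# an empty $node collapses the candidate path, so it never exists and
# the global /proc/meminfo view is used instead of a per-node one.
node=
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
echo "reading $mem_f"  # -> reading /proc/meminfo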
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75481088 kB' 'MemAvailable: 78959180 kB' 'Buffers: 4360 kB' 'Cached: 11429552 kB' 'SwapCached: 0 kB' 'Active: 8545080 kB' 'Inactive: 3529752 kB' 'Active(anon): 8042332 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644260 kB' 'Mapped: 185764 kB' 'Shmem: 7401412 kB' 'KReclaimable: 198000 kB' 'Slab: 556304 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358304 kB' 'KernelStack: 16432 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9472176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212136 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:12.513 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HugePages_Free each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and continue; HugePages_Rsvd matches]
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
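The hugepages.sh@102-109 lines above are the pass criterion for default_setup: all 1024 requested pages must be accounted for, with zero reserved, zero surplus, and zero anonymous hugepage usage, so 1024 == 1024 + 0 + 0 holds. A hedged sketch of that check, reusing the hypothetical get_meminfo_sketch helper from earlier (the 1024 expectation comes from this run; the function body is illustrative, not the SPDK script itself):

# Sketch of the default_setup verification, assuming get_meminfo_sketch above.
verify_default_hugepages() {
    local expected=${1:-1024}                    # pages requested by the test
    local anon surp resv total
    anon=$(get_meminfo_sketch AnonHugePages)     # kB of anonymous hugepages in use
    surp=$(get_meminfo_sketch HugePages_Surp)    # surplus pages beyond the pool
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # reserved-but-unfaulted pages
    total=$(get_meminfo_sketch HugePages_Total)  # configured pool size
    echo "nr_hugepages=$total"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # Same two assertions the trace shows at hugepages.sh@107 and @109
    (( expected == total + surp + resv )) && (( expected == total ))
}
verify_default_hugepages 1024  # exits 0 on this box: 1024 == 1024 + 0 + 0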
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75480620 kB' 'MemAvailable: 78958712 kB' 'Buffers: 4360 kB' 'Cached: 11429572 kB' 'SwapCached: 0 kB' 'Active: 8545148 kB' 'Inactive: 3529752 kB' 'Active(anon): 8042400 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644316 kB' 'Mapped: 185860 kB' 'Shmem: 7401432 kB' 'KReclaimable: 198000 kB' 'Slab: 556304 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358304 kB' 'KernelStack: 16416 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9472040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212072 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.515 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 compare-and-continue trace repeats for each intervening meminfo field ...]
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.516 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.517 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.517 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
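The get_nodes pass traced above (hugepages.sh@27-33) builds the per-NUMA-node picture: glob the node directories and record each node's current hugepage count, here nodes_sys[0]=1024 and nodes_sys[1]=0. A sketch of that loop, under the assumption that the counts come from the standard per-node sysfs counter (the trace only shows the resulting assignments, not the source file):

shopt -s extglob nullglob
# Sketch: one array entry per NUMA node holding its current count of
# default-size (2048 kB) hugepages. The sysfs path is an assumption.
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}   # 2 on this machine
(( no_nodes > 0 ))          # the hugepages.sh@33 sanity check

The per-node dump that follows is the same get_meminfo lookup as before, now pointed at node0's meminfo file.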
00:03:12.517 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 40846096 kB' 'MemUsed: 7223816 kB' 'SwapCached: 0 kB' 'Active: 3058108 kB' 'Inactive: 115160 kB' 'Active(anon): 2731748 kB' 'Inactive(anon): 0 kB' 'Active(file): 326360 kB' 'Inactive(file): 115160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2792004 kB' 'Mapped: 80904 kB' 'AnonPages: 384412 kB' 'Shmem: 2350484 kB' 'KernelStack: 8232 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71384 kB' 'Slab: 256948 kB' 'SReclaimable: 71384 kB' 'SUnreclaim: 185564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:12.517 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.517 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 compare-and-continue trace repeats for each intervening meminfo field ...]
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:12.518 node0=1024 expecting 1024
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:12.518
00:03:12.518 real	0m8.576s
00:03:12.518 user	0m1.986s
00:03:12.518 sys	0m3.555s
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:12.518 16:13:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:12.518 ************************************
00:03:12.518 END TEST default_setup
00:03:12.518 ************************************
00:03:12.518 16:13:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:12.518 16:13:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:12.518 16:13:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:12.518 16:13:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:12.518 16:13:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:12.518 ************************************
00:03:12.518 START TEST per_node_1G_alloc
00:03:12.518 ************************************
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
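get_test_nr_hugepages 1048576 0 1 asks for 1 GiB of hugepages on each of nodes 0 and 1; with the 2048 kB Hugepagesize visible in the dumps above, that works out to 1048576 / 2048 = 512 pages per node, which is exactly what the @57 and @71 assignments below set up. The arithmetic as a sketch (variable names mirror the trace):

# Sketch: turn a per-node size request into default-size pages per node.
size=1048576                                  # kB, from get_test_nr_hugepages 1048576 0 1
default_hugepages=2048                        # kB, the Hugepagesize in the dumps
nr_hugepages=$(( size / default_hugepages ))  # 512
user_nodes=(0 1)
nodes_test=()
for _no_nodes in "${user_nodes[@]}"; do
    nodes_test[_no_nodes]=$nr_hugepages       # 512 pages on node0 and on node1
done
# The harness then re-runs setup with these in the environment, which is
# the NRHUGE=512 / HUGENODE=0,1 invocation of scripts/setup.sh seen below.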
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:12.518 16:13:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:15.919 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.920 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.920 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
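Before counting anonymous hugepages, verify_nr_hugepages first checks the transparent-hugepage mode: the @96 test above shows this box reporting "always [madvise] never", i.e. THP is not disabled outright. A sketch of that probe (standard THP sysfs switch; get_meminfo is the helper sketched earlier, not part of this snippet):

# Sketch: only expect AnonHugePages when THP is not "[never]".
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
echo "THP mode: $thp"                    # "always [madvise] never" in this run
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)    # helper from the earlier sketch
    echo "anon_hugepages=$anon"          # 0 in the dumps above
fi

The value feeds the same identity this section keeps re-checking at hugepages.sh@107-130: the kernel's HugePages_Total must equal the requested pages plus surplus plus reserved, and each node's count is echoed against its expectation (node0=1024 expecting 1024 above).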
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75521100 kB' 'MemAvailable: 78999192 kB' 'Buffers: 4360 kB' 'Cached: 11429704 kB' 'SwapCached: 0 kB' 'Active: 8541280 kB' 'Inactive: 3529752 kB' 'Active(anon): 8038532 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640188 kB' 'Mapped: 184788 kB' 'Shmem: 7401564 kB' 'KReclaimable: 198000 kB' 'Slab: 556400 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358400 kB' 'KernelStack: 16464 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9463308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212200 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 compare-and-continue trace repeats for each intervening meminfo field ...]
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.838 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
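For readers decoding the trace: the loop above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "field: value" line at a time and skipping every field until the requested one (here AnonHugePages) matches. A minimal bash sketch of what the traced statements (common.sh@17 through @33) appear to implement; the names come straight from the trace, but the loop scaffolding and the <<< feed are reconstructions, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern that strips "Node <n> " prefixes

    # get_meminfo <field> [node]: print <field>'s value from /proc/meminfo, or from
    # the per-NUMA-node meminfo file when a node number is given; missing values
    # fall back to 0 (matching the "echo 0" seen in the trace).
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node <n> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the long skip trace above is this test
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages  # prints 0 on this box, hence the anon=0 seen above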
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75524136 kB' 'MemAvailable: 79002228 kB' 'Buffers: 4360 kB' 'Cached: 11429708 kB' 'SwapCached: 0 kB' 'Active: 8540916 kB' 'Inactive: 3529752 kB' 'Active(anon): 8038168 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639896 kB' 'Mapped: 184716 kB' 'Shmem: 7401568 kB' 'KReclaimable: 198000 kB' 'Slab: 556384 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358384 kB' 'KernelStack: 16416 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9462956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212104 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.839 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the same field-by-field skip trace repeats down the snapshot until HugePages_Surp matches ...]
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... same get_meminfo preamble as above (setup/common.sh@18 through @31: local node=, local var val, local mem_f mem, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ' read) ...]
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75524136 kB' 'MemAvailable: 79002228 kB' 'Buffers: 4360 kB' 'Cached: 11429728 kB' 'SwapCached: 0 kB' 'Active: 8540696 kB' 'Inactive: 3529752 kB' 'Active(anon): 8037948 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639564 kB' 'Mapped: 184716 kB' 'Shmem: 7401588 kB' 'KReclaimable: 198000 kB' 'Slab: 556384 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358384 kB' 'KernelStack: 16384 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9462984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212120 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.840 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the same field-by-field skip trace repeats down the snapshot until HugePages_Rsvd matches ...]
00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
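Once anon, surp, and resv are extracted, hugepages.sh cross-checks the kernel's hugepage accounting before reporting the numbers below. A small sketch of the @107/@109 consistency checks traced next, using the values this run just produced (1024 pages requested, zero surplus/reserved):

    # Values the three get_meminfo lookups above returned on this machine.
    nr_hugepages=1024  # requested 2 MiB-page pool size for the test
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd
    anon=0             # AnonHugePages

    # As traced at hugepages.sh@107 and @109: the pool the kernel reports must
    # equal the requested size once surplus and reserved pages are folded in.
    (( 1024 == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved hugepages" >&2
    (( 1024 == nr_hugepages )) || echo "hugepage pool size mismatch" >&2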
nr_hugepages=1024 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:17.841 resv_hugepages=0 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:17.841 surplus_hugepages=0 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:17.841 anon_hugepages=0 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75523884 kB' 'MemAvailable: 79001976 kB' 'Buffers: 4360 kB' 'Cached: 11429756 kB' 'SwapCached: 0 kB' 'Active: 8541144 kB' 'Inactive: 3529752 kB' 'Active(anon): 8038396 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640000 kB' 'Mapped: 184716 kB' 'Shmem: 7401616 kB' 'KReclaimable: 198000 kB' 'Slab: 556384 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358384 kB' 'KernelStack: 16432 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9463508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212136 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- 
00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75523884 kB' 'MemAvailable: 79001976 kB' 'Buffers: 4360 kB' 'Cached: 11429756 kB' 'SwapCached: 0 kB' 'Active: 8541144 kB' 'Inactive: 3529752 kB' 'Active(anon): 8038396 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640000 kB' 'Mapped: 184716 kB' 'Shmem: 7401616 kB' 'KReclaimable: 198000 kB' 'Slab: 556384 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358384 kB' 'KernelStack: 16432 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9463508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212136 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.841 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: every remaining field, MemFree through Unaccepted, fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and continues]
00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
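The get_meminfo helper whose every comparison is traced above reduces to a small field scanner: pick /proc/meminfo, or a node's own meminfo file when a node number is supplied, strip the "Node <N> " prefix those per-node files carry, then split each line on ': ' until the requested field turns up. The backslash-riddled \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l in the xtrace is simply how bash prints a literal, unglobbed right-hand side of [[ == ]]. A condensed re-implementation under those assumptions, not the verbatim setup/common.sh source:

#!/usr/bin/env bash
# Usage: get_meminfo <field> [node]
# Prints the value of <field>, system-wide or for one NUMA node.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # With a node number, read the per-node view instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <N> "; drop it so the
    # field names match the system-wide layout, then scan for <get>.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo HugePages_Total     # 1024 on this box
get_meminfo HugePages_Surp 0    # surplus pages on NUMA node 0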
00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.844 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41941008 kB' 'MemUsed: 6128904 kB' 'SwapCached: 0 kB' 'Active: 3055488 kB' 'Inactive: 115160 kB' 'Active(anon): 2729128 kB' 'Inactive(anon): 0 kB' 'Active(file): 326360 kB' 'Inactive(file): 115160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2792028 kB' 'Mapped: 79968 kB' 'AnonPages: 381708 kB' 'Shmem: 2350508 kB' 'KernelStack: 8216 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71384 kB' 'Slab: 257044 kB' 'SReclaimable: 71384 kB' 'SUnreclaim: 185660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.845 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: MemFree through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continue]
00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
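get_nodes, traced at the start of this block, discovers the topology by globbing /sys/devices/system/node/node<N> (the +([0-9]) is a bash extglob pattern), and the hugepages.sh@115-@117 loop then folds each node's HugePages_Surp back into the page count that node is expected to hold. Node 0 has just come back clean; node 1 is queried next. A sketch of the same bookkeeping, assuming two online NUMA nodes and the 512-page-per-node split this run configured:

#!/usr/bin/env bash
shopt -s extglob nullglob   # +([0-9]) needs extglob; nullglob if no nodes exist

declare -a nodes_test
# Every online node is expected to hold an even 512-page share.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_test[${node##*node}]=512
done
echo "no_nodes=${#nodes_test[@]}"

for node in "${!nodes_test[@]}"; do
    # Surplus pages would mean a node grew past its configured share.
    surp=$(awk '/HugePages_Surp/ {print $NF}' \
        "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting 512"
done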
00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.847 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33582136 kB' 'MemUsed: 10641484 kB' 'SwapCached: 0 kB' 'Active: 5485716 kB' 'Inactive: 3414592 kB' 'Active(anon): 5309328 kB' 'Inactive(anon): 0 kB' 'Active(file): 176388 kB' 'Inactive(file): 3414592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8642132 kB' 'Mapped: 104748 kB' 'AnonPages: 258308 kB' 'Shmem: 5051152 kB' 'KernelStack: 8216 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126616 kB' 'Slab: 299340 kB' 'SReclaimable: 126616 kB' 'SUnreclaim: 172724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.848 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.848 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: MemFree through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continue]
00:03:17.850 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.850 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.107 node0=512 expecting 512 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.107 node1=512 expecting 512 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.107 00:03:18.107 real 0m5.513s 00:03:18.107 user 0m2.021s 00:03:18.107 sys 0m3.542s 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.107 16:14:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.107 ************************************ 00:03:18.107 END TEST per_node_1G_alloc 00:03:18.107 ************************************
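per_node_1G_alloc passes: each node reports exactly the 512 pages it was configured for, with no surplus, in about 5.5 seconds of wall time. The suite then moves on to even_2G_alloc, whose setup is traced below: a 2097152 kB (2 GiB) request divided by the 2048 kB default hugepage size from the meminfo dump above yields 1024 pages, pre-split evenly at 512 per node before the allocation is attempted. The arithmetic, as a small sketch:

#!/usr/bin/env bash
size_kb=2097152   # the 2G request, in kB, as passed to get_test_nr_hugepages
page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
nr_hugepages=$(( size_kb / page_kb ))                        # 1024
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))                      # 512
echo "nr_hugepages=$nr_hugepages -> $per_node pages on each of $no_nodes nodes"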
00:03:18.108 16:14:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:18.108 16:14:03 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:18.108 16:14:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:18.108 16:14:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:18.108 16:14:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:18.108 ************************************
00:03:18.108 START TEST even_2G_alloc
00:03:18.108 ************************************
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
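Before setup.sh runs, note what the trace just computed: get_test_nr_hugepages turned the 2097152 kB (2 GiB) request into nr_hugepages=1024 (2097152 / 2048 kB default hugepage size), and get_test_nr_hugepages_per_node split that evenly over the two NUMA nodes. A sketch of that split as inferred from the traced lines (the helper body is reconstructed and omits the user-supplied node-list handling):

    # Even split of nr_hugepages across NUMA nodes (reconstructed sketch).
    get_test_nr_hugepages_per_node() {
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=2 # NUMA node count on this test box
        local -g nodes_test=()
        while ((_no_nodes > 0)); do
            # Integer division; any remainder lands on the lower-numbered nodes.
            nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
            : $((_nr_hugepages -= nodes_test[_no_nodes - 1])) # traces as ': 512', ': 0'
            : $((--_no_nodes))                                # traces as ': 1', ': 0'
        done
    }

    nr_hugepages=$((2097152 / 2048)) # 1024 pages of 2048 kB = 2 GiB
    get_test_nr_hugepages_per_node
    echo "${nodes_test[@]}" # -> 512 512, matching nodes_test[...]=512 above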
00:03:18.108 16:14:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:21.383 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.383 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.383 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
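The @96 test above is verify_nr_hugepages checking the transparent-hugepage mode: the string always [madvise] never is this host's /sys/kernel/mm/transparent_hugepage/enabled, and the glob only fails to match when [never] is the selected mode. In other words, AnonHugePages is sampled only when THP could actually hand out anonymous hugepages. A small sketch of that gate (paths and variable names follow the trace; get_meminfo is the helper sketched earlier):

    # Sketch of the THP gate traced at setup/hugepages.sh@96-97.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled) # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not pinned off, so the anonymous hugepage counter matters.
        anon=$(get_meminfo AnonHugePages)
    fi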
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75555720 kB' 'MemAvailable: 79033812 kB' 'Buffers: 4360 kB' 'Cached: 11429892 kB' 'SwapCached: 0 kB' 'Active: 8542404 kB' 'Inactive: 3529752 kB' 'Active(anon): 8039656 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640664 kB' 'Mapped: 184920 kB' 'Shmem: 7401752 kB' 'KReclaimable: 198000 kB' 'Slab: 556680 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358680 kB' 'KernelStack: 16432 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9464016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212104 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:23.288 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop continues past every /proc/meminfo key from MemTotal through HardwareCorrupted until the requested key matches]
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75557016 kB' 'MemAvailable: 79035108 kB' 'Buffers: 4360 kB' 'Cached: 11429896 kB' 'SwapCached: 0 kB' 'Active: 8541868 kB' 'Inactive: 3529752 kB' 'Active(anon): 8039120 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640584 kB' 'Mapped: 184792 kB' 'Shmem: 7401756 kB' 'KReclaimable: 198000 kB' 'Slab: 556688 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358688 kB' 'KernelStack: 16432 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9464032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212072 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:23.289 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop continues past every /proc/meminfo key from MemTotal through HugePages_Rsvd until the requested key matches]
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
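At this point verify_nr_hugepages holds anon=0 and surp=0 and is fetching HugePages_Rsvd next. The shape of the accounting it is building up, as far as this capture shows it (the final comparison below is an assumption; the exact expression in setup/hugepages.sh is not visible here):

    # Reconstructed shape of the system-wide half of verify_nr_hugepages.
    anon=$(get_meminfo AnonHugePages)    # 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # queried below
    total=$(get_meminfo HugePages_Total) # 1024, matching NRHUGE=1024 above
    # One plausible form of the check: surplus pages would mean the kernel
    # over-allocated, reserved pages still count as ours, so the adjusted
    # total should equal the requested nr_hugepages.
    ((total - surp + resv == 1024)) && echo 'nr_hugepages=1024 as expected'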
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75557016 kB' 'MemAvailable: 79035108 kB' 'Buffers: 4360 kB' 'Cached: 11429896 kB' 'SwapCached: 0 kB' 'Active: 8541868 kB' 'Inactive: 3529752 kB' 'Active(anon): 8039120 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640584 kB' 'Mapped: 184792 kB' 'Shmem: 7401756 kB' 'KReclaimable: 198000 kB' 'Slab: 556688 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358688 kB' 'KernelStack: 16432 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9464056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212072 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:23.291 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop is continuing past each /proc/meminfo key while scanning for HugePages_Rsvd; this capture ends mid-scan at VmallocChunk]
read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.293 nr_hugepages=1024 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.293 resv_hugepages=0 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.293 surplus_hugepages=0 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.293 anon_hugepages=0 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- 
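The trace above and below is bash xtrace output from setup/common.sh's get_meminfo helper: it loads a meminfo file into an array, strips any per-node prefix, then walks the lines with IFS=': ' until the requested key turns up. A minimal sketch of that pattern, reconstructed from the visible trace statements (a simplification, not the verbatim SPDK source):

    #!/usr/bin/env bash
    # get_meminfo <key> [node] -- echo the value of <key> from /proc/meminfo,
    # or from /sys/devices/system/node/node<N>/meminfo when a node is given.
    # Sketch reconstructed from the trace; not the exact SPDK helper.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo
        local -a mem
        # With a node index, read the per-node file from sysfs instead
        # (common.sh@23-24 in the trace).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that first
        # (the extglob expansion seen at common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"   # HugePages_* counts carry no kB suffix
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd   # prints 0 on this machine, per the trace

This is why every miss shows up in the log as an IFS=': ' / read / [[ ... ]] / continue quartet: one loop iteration per meminfo key until the match.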
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.293 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75557800 kB' 'MemAvailable: 79035892 kB' 'Buffers: 4360 kB' 'Cached: 11429932 kB' 'SwapCached: 0 kB' 'Active: 8541720 kB' 'Inactive: 3529752 kB' 'Active(anon): 8038972 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640380 kB' 'Mapped: 184792 kB' 'Shmem: 7401792 kB' 'KReclaimable: 198000 kB' 'Slab: 556688 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358688 kB' 'KernelStack: 16416 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9464076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212072 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:23.294 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [trace condensed: the read loop walks the keys of the dump above in order, MemTotal through CmaFree, hitting `continue` on each while searching for HugePages_Total]
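A note on the backslash runs such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l that fill this section: the right-hand side of [[ == ]] is a pattern, and bash's xtrace appears to re-quote an unquoted pattern operand by escaping every character when it prints the comparison. A two-line demo of the assumed behavior, matching what this log shows:

    set -x
    get=HugePages_Total var=Unaccepted
    [[ $var == $get ]]
    # xtrace prints: + [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]

Each such line is therefore just a plain string comparison against the key being searched for.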
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
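get_nodes (hugepages.sh@27-33 just above) enumerates the NUMA nodes from sysfs and records the per-node expectation for the even_2G_alloc case: 1024 hugepages x 2048 kB = 2 GiB total, split 512/512 across the two nodes (the 'Hugetlb: 2097152 kB' entry in the dump above confirms the 2 GiB). A sketch of that step following the trace; the hard-coded 512 mirrors what the trace shows, and nullglob is an added safety assumption:

    # Enumerate /sys/devices/system/node/node<N> and expect an even split.
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # 1024 pages over 2 nodes
    done
    no_nodes=${#nodes_sys[@]}           # 2 on this machine
    (( no_nodes > 0 ))                  # sanity check, as at hugepages.sh@33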
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.295 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41937332 kB' 'MemUsed: 6132580 kB' 'SwapCached: 0 kB' 'Active: 3057264 kB' 'Inactive: 115160 kB' 'Active(anon): 2730904 kB' 'Inactive(anon): 0 kB' 'Active(file): 326360 kB' 'Inactive(file): 115160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2792056 kB' 'Mapped: 79980 kB' 'AnonPages: 383552 kB' 'Shmem: 2350536 kB' 'KernelStack: 8264 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71384 kB' 'Slab: 257192 kB' 'SReclaimable: 71384 kB' 'SUnreclaim: 185808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [trace condensed: the read loop walks node0's keys MemTotal through FilePmdMapped, hitting `continue` on each while searching for HugePages_Surp]
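The loop entered at hugepages.sh@115-117 folds reserved and surplus pages into each node's expected count; the node0 lookup completing just below returns 0, so nothing is added. A sketch of that accounting, reusing the get_meminfo sketch from earlier. nodes_test holding the expectations is an assumption (the trace shows get_nodes filling nodes_sys; the mapping to nodes_test is not visible in this excerpt), and the closing comparison is a hypothetical reconstruction since the log is cut off before any such step:

    resv=0                      # from the HugePages_Rsvd lookup above
    nodes_test=(512 512)        # per-node expectations from get_nodes
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # hugepages.sh@117 folds in the node's surplus pages:
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done
    # Hypothetical final check -- this excerpt ends before it:
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] == $(get_meminfo HugePages_Total "$node") )) ||
            echo "node$node: unexpected hugepage count"
    done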
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [trace condensed: HugePages_Total and HugePages_Free likewise miss and `continue`]
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.296 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33619716 kB' 'MemUsed: 10603904 kB' 'SwapCached: 0 kB' 'Active: 5484664 kB' 'Inactive: 3414592 kB' 'Active(anon): 5308276 kB' 'Inactive(anon): 0 kB' 'Active(file): 176388 kB' 'Inactive(file): 3414592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8642284 kB' 'Mapped: 104812 kB' 'AnonPages: 257024 kB' 'Shmem: 5051304 kB' 'KernelStack: 8168 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126616 kB' 'Slab: 299496 kB' 'SReclaimable: 126616 kB' 'SUnreclaim: 172880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [trace condensed: the read loop walks node1's keys MemTotal through Mapped the same way, continuing toward HugePages_Surp]
00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.297 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:23.298 node0=512 expecting 512 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:23.298 node1=512 expecting 512 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:23.298 00:03:23.298 real 0m5.127s 00:03:23.298 user 0m1.653s 00:03:23.298 sys 0m3.372s 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.298 16:14:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.298 ************************************ 00:03:23.298 END TEST even_2G_alloc 00:03:23.298 ************************************ 00:03:23.298 16:14:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:23.298 16:14:08 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:23.298 16:14:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.298 16:14:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 
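The pass/fail logic traced above reduces to reading HugePages_Total from each NUMA node's meminfo and comparing it with the count the test requested. A minimal standalone sketch of that check follows; the helper name check_node_hugepages and the hard-coded expectations are illustrative stand-ins, not the actual setup/hugepages.sh code, which drives the same comparison from its nodes_test[] array.

#!/usr/bin/env bash
# Sketch: per-node hugepage verification in the style of the
# "nodeN=X expecting X" lines above. Expected counts are hypothetical.
declare -A expected=([0]=512 [1]=512)

check_node_hugepages() {
    local node=$1 want=$2 got
    # Per-node counters live in sysfs, one meminfo file per NUMA node.
    got=$(awk '/HugePages_Total/ {print $NF}' \
        "/sys/devices/system/node/node${node}/meminfo")
    echo "node${node}=${got} expecting ${want}"
    [[ $got == "$want" ]]
}

for node in "${!expected[@]}"; do
    check_node_hugepages "$node" "${expected[$node]}" || exit 1
done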
00:03:23.298 16:14:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:23.298 16:14:08 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:23.298 16:14:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:23.298 16:14:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:23.298 16:14:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:23.298 ************************************
00:03:23.298 START TEST odd_alloc
00:03:23.298 ************************************
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.298 16:14:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
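The sizing above is plain arithmetic: HUGEMEM=2049 MB is 2098176 kB, and at the 2048 kB default hugepage size that demands 1025 pages, an odd count that cannot split evenly between the two NUMA nodes, so one node carries the extra page. The sketch below reproduces that computation under the assumption of round-up division; the traced values (size=2098176, nr_hugepages=1025) are consistent with it, though the exact rounding in setup/common.sh is not shown in this log, and all variable names here are illustrative.

#!/usr/bin/env bash
# Sketch of the odd_alloc sizing seen above. Values mirror the trace
# (HUGEMEM=2049, Hugepagesize: 2048 kB).
hugemem_mb=2049
default_hugepage_kb=2048

size_kb=$(( hugemem_mb * 1024 ))  # 2098176 kB, the get_test_nr_hugepages argument
# Round up so the request is fully covered: ceil(2098176 / 2048) = 1025.
nr_hugepages=$(( (size_kb + default_hugepage_kb - 1) / default_hugepage_kb ))

# An odd total cannot split evenly across two nodes; one node takes the
# remainder page, matching nodes_test[0]=513 and nodes_test[1]=512 above.
nodes=2
per_node=$(( nr_hugepages / nodes ))   # 512
remainder=$(( nr_hugepages % nodes ))  # 1
echo "node0=$(( per_node + remainder )) node1=${per_node}"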
00:03:26.583 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.583 [... likewise 0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 (all 8086 2021), plus 0000:1a:00.0 (8086 0a54): every device is already bound to the vfio-pci driver ...]
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75573072 kB' 'MemAvailable: 79051164 kB' 'Buffers: 4360 kB' 'Cached: 11430084 kB' 'SwapCached: 0 kB' 'Active: 8543556 kB' 'Inactive: 3529752 kB' 'Active(anon): 8040808 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641688 kB' 'Mapped: 184976 kB' 'Shmem: 7401944 kB' 'KReclaimable: 198000 kB' 'Slab: 556448 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358448 kB' 'KernelStack: 16432 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 9464856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212072 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.485 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 [... xtrace: each key from MemTotal through HardwareCorrupted is tested against \A\n\o\n\H\u\g\e\P\a\g\e\s and continues ...]
00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
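Every get_meminfo call traced here follows one pattern: slurp the target meminfo file, strip any "Node N " prefix, then scan key/value pairs with IFS=': ' until the requested key matches and its value is echoed. A reconstruction of that pattern as a standalone function is below; the structure mirrors the trace (mapfile, the +([0-9]) strip, the continue-until-match loop), but treat it as a sketch rather than the exact setup/common.sh source.

#!/usr/bin/env bash
# Sketch of the scan pattern traced in setup/common.sh's get_meminfo.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem

    # Per-node counters, when a node is given and its sysfs file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node 0 " prefix, if any

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total      # e.g. 1025 on this box
get_meminfo HugePages_Total 0    # per-node count for node0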
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75573816 kB' 'MemAvailable: 79051908 kB' 'Buffers: 4360 kB' 'Cached: 11430088 kB' 'SwapCached: 0 kB' 'Active: 8542676 kB' 'Inactive: 3529752 kB' 'Active(anon): 8039928 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641228 kB' 'Mapped: 184860 kB' 'Shmem: 7401948 kB' 'KReclaimable: 198000 kB' 'Slab: 556448 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358448 kB' 'KernelStack: 16416 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 9464872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212056 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.751 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace elided: the IFS=': ' read loop compares and skips every remaining field (WritebackTmp through HugePages_Rsvd) with "continue" until the requested field matches]
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.752 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.753 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75574320 kB' 'MemAvailable: 79052412 kB' 'Buffers: 4360 kB' 'Cached: 11430088 kB' 'SwapCached: 0 kB' 'Active: 8543180 kB' 'Inactive: 3529752 kB' 'Active(anon): 8040432 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641732 kB' 'Mapped: 184860 kB' 'Shmem: 7401948 kB' 'KReclaimable: 198000 kB' 'Slab: 556448 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358448 kB' 'KernelStack: 16416 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 9464892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212056 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
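The scan above is setup/common.sh's get_meminfo helper: it snapshots a meminfo file with mapfile, strips any per-node "Node N " prefix, then walks the lines with IFS=': ' read -r var val _ until the requested field matches. (The backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p is just how bash xtrace renders a quoted, non-glob right-hand side inside [[ ]].) A minimal standalone sketch of the same pattern, reconstructed from this trace rather than copied from the SPDK source:

#!/usr/bin/env bash
# Sketch of the get_meminfo loop seen in the trace (a reconstruction, not the
# verbatim setup/common.sh). Prints the value of one field from /proc/meminfo,
# or from a per-node meminfo file when a node id is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo mem line
    # With a node id, switch to the per-node file (common.sh@23-@24).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"                # common.sh@28
    mem=("${mem[@]#Node +([0-9]) }")         # drop "Node N " prefixes (@29)
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue     # the long skip runs above (@32)
        echo "$val"                          # common.sh@33
        return 0
    done
    return 1
}

get_meminfo HugePages_Rsvd    # prints 0 on this node, per the snapshot above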
[xtrace elided: the read loop skips MemTotal through HugePages_Free with "continue" until HugePages_Rsvd matches]
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:28.754 nr_hugepages=1025
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:28.754 resv_hugepages=0
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.754 surplus_hugepages=0
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:28.754 anon_hugepages=0
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
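At hugepages.sh@107-@110 the test asserts that the kernel's accounting matches the deliberately odd request: 1025 pages in total, none surplus, none reserved. The same check in sketch form, reusing the get_meminfo sketch above (variable names follow the trace; this is a sketch, not the verbatim script):

# Consistency check behind hugepages.sh@107-@110.
nr_hugepages=1025                          # the odd allocation under test
surp=$(get_meminfo HugePages_Surp)         # 0 in this run (hugepages.sh@99)
resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run (hugepages.sh@100)
total=$(get_meminfo HugePages_Total)       # 1025 in this run
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
(( total == nr_hugepages )) || echo "surplus/reserved pages present"
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"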
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:28.754 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.755 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75574588 kB' 'MemAvailable: 79052680 kB' 'Buffers: 4360 kB' 'Cached: 11430128 kB' 'SwapCached: 0 kB' 'Active: 8542888 kB' 'Inactive: 3529752 kB' 'Active(anon): 8040140 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641448 kB' 'Mapped: 184860 kB' 'Shmem: 7401988 kB' 'KReclaimable: 198000 kB' 'Slab: 556448 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358448 kB' 'KernelStack: 16432 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 9464912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212056 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[xtrace elided: the read loop skips MemTotal through Unaccepted with "continue" until HugePages_Total matches]
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41956164 kB' 'MemUsed: 6113748 kB' 'SwapCached: 0 kB' 'Active: 3058476 kB' 'Inactive: 115160 kB' 'Active(anon): 2732116 kB' 'Inactive(anon): 0 kB' 'Active(file): 326360 kB' 'Inactive(file): 115160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2792116 kB' 'Mapped: 79980 kB' 'AnonPages: 384684 kB' 'Shmem: 2350596 kB' 'KernelStack: 8280 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71384 kB' 'Slab: 257120 kB' 'SReclaimable: 71384 kB' 'SUnreclaim: 185736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
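get_nodes (hugepages.sh@27-@33) discovers the NUMA nodes through an extglob pattern over sysfs, and the @115-@117 loop then folds each node's surplus count into the expected per-node totals (512 pages on node0, 513 on node1, the odd split this test exercises). A sketch of that enumeration; filling nodes_sys via get_meminfo is an illustrative stand-in, assumed here, for however hugepages.sh actually populates it (the trace only shows the already-expanded assignments):

# Sketch of the per-node enumeration pattern.
# Assumption: per-node counts are read back via the get_meminfo sketch above;
# the trace itself shows only nodes_sys[0]=512 and nodes_sys[1]=513.
shopt -s extglob
declare -A nodes_sys

for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                                  # ".../node1" -> "1"
    nodes_sys[$id]=$(get_meminfo HugePages_Total "$id")
done
no_nodes=${#nodes_sys[@]}                              # 2 on this machine
(( no_nodes > 0 )) || exit 1
echo "nodes=$no_nodes per-node hugepages: ${nodes_sys[*]}"   # 512 and 513 here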
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.756 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the IFS=': ' / read / continue scan repeats once per remaining meminfo field, Inactive(file) through HugePages_Free, none of which matches HugePages_Surp]
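For readability, the repetitive scan records above and below are collapsed. The function being traced is setup/common.sh's get_meminfo; the bash sketch below is reconstructed from the traced commands (common.sh@16 through @33) and should be read as an approximation for the reader, not the verbatim source.

#!/usr/bin/env bash
# Sketch of get_meminfo, pieced together from the xtrace records in this log
# (setup/common.sh@16-33). An approximation, not the source file.
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1       # field name to look up, e.g. HugePages_Surp
    local node=$2      # optional NUMA node number
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node counters live in sysfs; use them when a node was requested.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # This is the loop the trace repeats once per field: split on ': ',
    # skip until the requested field, then print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}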
00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.757 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33618136 kB' 'MemUsed: 10605484 kB' 'SwapCached: 0 kB' 'Active: 5484784 kB' 'Inactive: 3414592 kB' 'Active(anon): 5308396 kB' 'Inactive(anon): 0 kB' 'Active(file): 176388 kB' 'Inactive(file): 3414592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8642412 kB' 'Mapped: 104880 kB' 'AnonPages: 257128 kB' 'Shmem: 5051432 kB' 'KernelStack: 8168 kB' 'PageTables: 4688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126616 kB' 'Slab: 299328 kB' 'SReclaimable: 126616 kB' 'SUnreclaim: 172712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
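Matching the call traced just above (get_meminfo HugePages_Surp 1, which switches to /sys/devices/system/node/node1/meminfo), the sketch would be used as follows; the node-1 dump printed above carries HugePages_Surp: 0, so the call echoes 0.

# Hypothetical usage mirroring the traced call: node 1's surplus hugepages.
surp=$(get_meminfo HugePages_Surp 1)
echo "$surp"   # 0, per the node-1 dump above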
00:03:28.758 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the same per-field scan repeats over the node-1 dump, MemUsed through HugePages_Free, without a match] 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node
in "${!nodes_test[@]}" 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:28.759 node0=512 expecting 513 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:28.759 node1=513 expecting 512 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:28.759 00:03:28.759 real 0m5.525s 00:03:28.759 user 0m1.800s 00:03:28.759 sys 0m3.549s 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.759 16:14:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:28.759 ************************************ 00:03:28.759 END TEST odd_alloc 00:03:28.759 ************************************ 00:03:28.759 16:14:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:28.759 16:14:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:28.759 16:14:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.759 16:14:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.759 16:14:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.759 ************************************ 00:03:28.759 START TEST custom_alloc 00:03:28.759 ************************************ 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:28.759 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:28.760 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:29.019 16:14:14 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.019 16:14:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:32.307 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.307 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 
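The custom_alloc trace above (before the device rebind output) sizes two pools, 1048576 kB and 2097152 kB, against the 2048 kB default hugepage size, then joins them into the HUGENODE string handed to scripts/setup.sh. A condensed sketch of that arithmetic and join, mirroring the traced hugepages.sh lines (@49-57, @167, @181-187); the helper name build_hugenode is ours, and the whole block is a simplification rather than a verbatim copy.

#!/usr/bin/env bash
# Condensed sketch of the custom_alloc sizing traced above; simplified from
# setup/hugepages.sh. Sizes are in kB, matching the 2048 kB Hugepagesize
# reported in the meminfo dumps.

default_hugepages=2048                              # kB per hugepage

declare -a nodes_hp
nodes_hp[0]=$(( 1048576 / default_hugepages ))      # 1 GiB worth -> 512 pages
nodes_hp[1]=$(( 2097152 / default_hugepages ))      # 2 GiB worth -> 1024 pages

# Join "nodes_hp[N]=count" entries with commas, as hugepages.sh@181-187 does.
build_hugenode() {                                  # helper name is ours
    local IFS=,                                     # the trace sets IFS=, too
    local node
    local -a parts=()
    for node in "${!nodes_hp[@]}"; do
        parts+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "${parts[*]}"
}

HUGENODE=$(build_hugenode)
echo "$HUGENODE"    # nodes_hp[0]=512,nodes_hp[1]=1024  (total: 1536 pages)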
00:03:32.307 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.307 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74527900 kB' 'MemAvailable: 78005992 kB' 'Buffers: 4360 kB' 'Cached: 11430272 kB' 'SwapCached: 0 kB' 'Active: 8543972 kB' 'Inactive: 3529752 kB' 'Active(anon): 8041224 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642444 kB' 'Mapped: 185060 kB' 'Shmem: 7402132 kB' 'KReclaimable: 198000 kB' 'Slab: 556180 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358180 kB' 'KernelStack: 16480 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 9466808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212104 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:34.210 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: the per-field scan walks MemTotal through HardwareCorrupted without matching AnonHugePages] 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74529168 kB' 'MemAvailable: 78007260 kB' 'Buffers: 4360 kB' 'Cached: 11430292 kB' 'SwapCached: 0 kB' 'Active: 8543276 kB' 'Inactive: 3529752 kB' 'Active(anon): 8040528 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641704 kB' 'Mapped: 184932 kB' 'Shmem: 7402152 kB' 'KReclaimable: 198000 kB' 'Slab: 556176 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358176 kB' 'KernelStack: 16368 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 9466824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212040 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:34.212 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: the per-field scan walks the dump toward HugePages_Surp]
IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:34.213 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.476 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74537504 kB' 'MemAvailable: 78015596 kB' 'Buffers: 4360 kB' 'Cached: 11430296 kB' 'SwapCached: 0 kB' 'Active: 8543596 kB' 'Inactive: 3529752 kB' 'Active(anon): 8040848 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642016 kB' 'Mapped: 184932 kB' 'Shmem: 7402156 kB' 'KReclaimable: 198000 kB' 'Slab: 556176 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358176 kB' 'KernelStack: 16384 kB' 'PageTables: 8184 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 9468332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212136 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 
16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.477 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
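The pattern repeated throughout this trace is setup/common.sh's meminfo lookup: mapfile slurps the chosen meminfo file, a "Node N" prefix strip normalizes per-node lines, and an IFS=': ' read/continue loop burns one trace line per non-matching key until the requested field is found and echoed. A minimal re-sketch of that lookup, reconstructed from the visible xtrace (not the verbatim SPDK source; the loop shape is inferred):

    #!/usr/bin/env bash
    # Re-sketch of the get_meminfo lookup driving the trace above.
    shopt -s extglob

    get_meminfo() {
        local get=$1
        local node=$2
        local var val _
        local mem_f=/proc/meminfo
        # With a node index, read the per-node sysfs copy instead; with
        # $node empty this test fails and /proc/meminfo is kept, exactly
        # as the [[ -e /sys/devices/system/node/node/meminfo ]] line shows.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it (extglob
        # pattern, mirroring mem=("${mem[@]#Node +([0-9]) }") in the trace).
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key by key; every non-matching key is one "continue" above.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp     # system-wide: prints 0 in this run
    get_meminfo HugePages_Free 0   # node0: reads node0/meminfo, prints 512 here

The cost of this approach is exactly what the log shows: one traced comparison per meminfo key per lookup, which is why each get_meminfo call produces such a long run of continue lines.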
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:34.478 nr_hugepages=1536
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:34.478 resv_hugepages=0
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:34.478 surplus_hugepages=0
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:34.478 anon_hugepages=0
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.478 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74536828 kB' 'MemAvailable: 78014920 kB' 'Buffers: 4360 kB' 'Cached: 11430316 kB' 'SwapCached: 0 kB' 'Active: 8543548 kB' 'Inactive: 3529752 kB' 'Active(anon): 8040800 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641884 kB' 'Mapped: 184932 kB' 'Shmem: 7402176 kB' 'KReclaimable: 198000 kB' 'Slab: 556176 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358176 kB' 'KernelStack: 16512 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 9476164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212152 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
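The arithmetic gates traced just above (hugepages.sh@107 and @110) pin down the hugepage accounting this test depends on: the kernel-reported HugePages_Total must equal the requested persistent pages plus surplus plus reserved. A sketch of that check, reusing the get_meminfo sketch earlier (1536 is this run's request; the snippet is illustrative, not the script's literal code):

    # Consistency gate on hugepage accounting, mirroring hugepages.sh@107/@110.
    nr_hugepages=1536
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1536 in this run

    # Kernel total must account for requested + surplus + reserved pages.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: kernel reports $total, expected $((nr_hugepages + surp + resv))" >&2
        exit 1
    fi

With surplus and reserved both zero here, the gate reduces to total == nr_hugepages, which is exactly the second check (( 1536 == nr_hugepages )) in the trace.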
[xtrace condensed: the setup/common.sh@31-32 read/continue loop walks MemTotal ... Unaccepted against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l without a match]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
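[ annotation: the long skip-scan above is setup/common.sh's get_meminfo walking a meminfo file with IFS=': ' until the requested key matches, then echoing its value. A minimal standalone sketch of that pattern, under the hypothetical name meminfo_value (not the SPDK helper itself): ]

#!/usr/bin/env bash
# Minimal sketch of the scanning pattern traced above: walk a meminfo
# file with IFS=': ' and print the value of the requested key.
meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is skipped -- the long run of
        # "continue" lines in the log above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

meminfo_value HugePages_Total   # prints 1536 on this test node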
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.480 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41962816 kB' 'MemUsed: 6107096 kB' 'SwapCached: 0 kB' 'Active: 3057872 kB' 'Inactive: 115160 kB' 'Active(anon): 2731512 kB' 'Inactive(anon): 0 kB' 'Active(file): 326360 kB' 'Inactive(file): 115160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2792176 kB' 'Mapped: 79988 kB' 'AnonPages: 383980 kB' 'Shmem: 2350656 kB' 'KernelStack: 8296 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71384 kB' 'Slab: 256488 kB' 'SReclaimable: 71384 kB' 'SUnreclaim: 185104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[ xtrace compressed: the node0 meminfo keys (MemTotal through HugePages_Free) are scanned and skipped with "continue" one by one ]
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
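[ annotation: per-node lookups differ only in the file they read. /sys/devices/system/node/nodeN/meminfo prefixes every line with "Node N ", which the trace strips with the extglob pattern #Node +([0-9]) before parsing. A hedged sketch combining both steps, under the hypothetical name node_meminfo_value: ]

#!/usr/bin/env bash
# Sketch of the per-node lookup seen in the trace: when a node number is
# given, read /sys/devices/system/node/node<N>/meminfo instead of
# /proc/meminfo and strip the leading "Node <N> " column first.
shopt -s extglob   # required for the +([0-9]) pattern below

node_meminfo_value() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem var val _
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node 0 ", "Node 1 ", ...
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

node_meminfo_value HugePages_Surp 0   # prints 0 for node0 in this run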
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.481 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.482 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 32573060 kB' 'MemUsed: 11650560 kB' 'SwapCached: 0 kB' 'Active: 5486180 kB' 'Inactive: 3414592 kB' 'Active(anon): 5309792 kB' 'Inactive(anon): 0 kB' 'Active(file): 176388 kB' 'Inactive(file): 3414592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8642540 kB' 'Mapped: 104944 kB' 'AnonPages: 258392 kB' 'Shmem: 5051560 kB' 'KernelStack: 8152 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126616 kB' 'Slab: 299688 kB' 'SReclaimable: 126616 kB' 'SUnreclaim: 173072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[ xtrace compressed: the node1 meminfo keys (MemTotal through HugePages_Free) are scanned and skipped with "continue" one by one ]
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:34.483 node0=512 expecting 512
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:34.483 node1=1024 expecting 1024
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:34.483 
00:03:34.483 real	0m5.592s
00:03:34.483 user	0m2.015s
00:03:34.483 sys	0m3.529s
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:34.483 16:14:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:34.483 ************************************
00:03:34.483 END TEST custom_alloc
00:03:34.483 ************************************
00:03:34.483 16:14:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:34.483 16:14:19 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:34.483 16:14:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:34.483 16:14:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:34.483 16:14:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:34.483 ************************************
00:03:34.483 START TEST no_shrink_alloc
00:03:34.483 ************************************
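[ annotation: the trace that follows shows get_test_nr_hugepages turning the request "2097152" into nr_hugepages=1024 and pinning it to node 0 via node_ids=('0'). With the 'Hugepagesize: 2048 kB' and 'Hugetlb: 2097152 kB' reported later in this log, the arithmetic appears to be a kB-to-page division; a sketch under that assumption: ]

#!/usr/bin/env bash
# Sketch of the size -> page-count arithmetic traced below. The kB unit
# for "size" is an assumption consistent with 'Hugetlb: 2097152 kB'.
size=2097152              # requested pool size, as in the trace
hugepagesize_kb=2048      # from 'Hugepagesize: 2048 kB'
nr_hugepages=$(( size / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"   # -> 1024, matching the log

# node_ids=('0') pins the whole request to node 0:
nodes_test=()
for node_id in 0; do
    nodes_test[node_id]=$nr_hugepages
done
echo "node0=${nodes_test[0]}"       # -> node0=1024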
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.483 16:14:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:37.771 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:37.771 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:37.771 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:39.670 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:39.670 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:39.670 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
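[ annotation: the @96 test a few lines up is a transparent-hugepage gate: the script checks that /sys/kernel/mm/transparent_hugepage/enabled is not set to [never] before sampling AnonHugePages. A minimal sketch of the same check: ]

#!/usr/bin/env bash
# Sketch of the THP gate: only sample AnonHugePages when transparent
# hugepages are not globally disabled ("[never]" selected).
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP may be backing anonymous mappings with huge pages; read the counter.
    grep '^AnonHugePages' /proc/meminfo
fi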
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.671 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75593756 kB' 'MemAvailable: 79071848 kB' 'Buffers: 4360 kB' 'Cached: 11430456 kB' 'SwapCached: 0 kB' 'Active: 8545140 kB' 'Inactive: 3529752 kB' 'Active(anon): 8042392 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643436 kB' 'Mapped: 185312 kB' 'Shmem: 7402316 kB' 'KReclaimable: 198000 kB' 'Slab: 556560 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358560 kB' 'KernelStack: 16448 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9467792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212200 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[ xtrace compressed: every /proc/meminfo key from MemTotal through HardwareCorrupted is skipped with "continue" until AnonHugePages matches ]
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.672 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75595580 kB' 'MemAvailable: 79073672 kB' 'Buffers: 4360 kB' 'Cached: 11430460 kB' 'SwapCached: 0 kB' 'Active: 8545348 kB' 'Inactive: 3529752 kB' 'Active(anon): 8042600 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643552 kB' 'Mapped: 184996 kB' 'Shmem: 7402320 kB' 'KReclaimable: 198000 kB' 'Slab: 556540 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358540 kB' 'KernelStack: 16400 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9467812 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 212120 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 
16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.673 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.674 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.675 16:14:25 
00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... same setup/common.sh@18-31 preamble as the HugePages_Surp call above: node unset, snapshot read from /proc/meminfo into mem[] ...]
00:03:39.675 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75595620 kB' 'MemAvailable: 79073712 kB' 'Buffers: 4360 kB' 'Cached: 11430476 kB' 'SwapCached: 0 kB' 'Active: 8545328 kB' 'Inactive: 3529752 kB' 'Active(anon): 8042580 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643536 kB' 'Mapped: 184996 kB' 'Shmem: 7402336 kB' 'KReclaimable: 198000 kB' 'Slab: 556540 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358540 kB' 'KernelStack: 16576 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9469320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212216 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
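Reconstructed from the setup/common.sh@17-33 traces above, the helper being exercised looks roughly like the sketch below; this is a minimal bash reconstruction, not the exact SPDK source — the per-node fallback shape and the trailing return 1 are assumptions inferred from the [[ -e /sys/devices/system/node/node/meminfo ]] probe and the scan pattern in the trace:

shopt -s extglob                       # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}           # field name, optional NUMA node
    local var val _
    local mem_f=/proc/meminfo mem
    # With a node given, prefer the per-node meminfo; in the trace above the
    # probe hits /sys/devices/system/node/node/meminfo because $node is empty,
    # so the system-wide file is used
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node lines
    # Scan field by field, exactly as the [[ ... ]] / continue traces show
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                    # kB amount, or a bare page count
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp             # prints 0 for the snapshots above

The IFS=': ' setting makes read split on both the colon and the padding spaces, so 'HugePages_Surp: 0' lands as var=HugePages_Surp, val=0, which is what lets the loop return the bare value.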
[... setup/common.sh@31-32 loop: the HugePages_Rsvd scan reads and skips every field from MemTotal through HugePages_Free ...]
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:39.677 nr_hugepages=1024
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:39.677 resv_hugepages=0
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:39.677 surplus_hugepages=0
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:39.677 anon_hugepages=0
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
[... same setup/common.sh@18-31 preamble as above: node unset, snapshot read from /proc/meminfo into mem[] ...]
00:03:39.677 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75592888 kB' 'MemAvailable: 79070980 kB' 'Buffers: 4360 kB' 'Cached: 11430500 kB' 'SwapCached: 0 kB' 'Active: 8545104 kB' 'Inactive: 3529752 kB' 'Active(anon): 8042356 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643224 kB' 'Mapped: 184996 kB' 'Shmem: 7402360 kB' 'KReclaimable: 198000 kB' 'Slab: 556540 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358540 kB' 'KernelStack: 16608 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9469344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212248 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.937 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
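The wall of continue entries around this point is bash xtrace output from setup/common.sh's get_meminfo: it splits every line of /proc/meminfo (or a per-node meminfo file) on IFS=': ' and steps past each field until the requested key matches, then echoes that key's value. Below is a minimal sketch of the same idiom; get_meminfo_sketch is a hypothetical, simplified stand-in, not the setup/common.sh source verbatim.

    # Scan a meminfo file field by field and print the value for one key.
    # Hypothetical simplified helper; per-node files prefix each line with
    # "Node <N> ", which the sed strips so the key comparison stays uniform.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 1024 for HugePages_Total on this box
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

On this box, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp 0 would print 0, matching the values the trace echoes.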
00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.938 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 40923320 kB' 'MemUsed: 7146592 kB' 'SwapCached: 0 kB' 'Active: 3057196 kB' 'Inactive: 115160 kB' 'Active(anon): 2730836 kB' 'Inactive(anon): 0 kB' 'Active(file): 326360 kB' 'Inactive(file): 115160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2792200 kB' 'Mapped: 79992 kB' 'AnonPages: 383228 kB' 'Shmem: 2350680 kB' 'KernelStack: 8216 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71384 kB' 'Slab: 256556 kB' 'SReclaimable: 71384 kB' 'SUnreclaim: 185172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.939 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
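The node pass traced here is the second half of the same verification: get_nodes earlier filled nodes_sys from the per-node sysfs hugepage counters (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2), and each node's meminfo is now re-read to confirm its surplus count. A sketch of that accounting, reusing the get_meminfo_sketch helper above and assuming the standard sysfs layout for the 2048 kB page size this system reports:

    # Reconcile the kernel-wide hugepage counters with the per-node split.
    # The hugepages-2048kB directory name matches the Hugepagesize dumped
    # above; other page sizes live in sibling directories.
    shopt -s extglob                 # enables the node+([0-9]) glob below
    expected=1024                    # the count the test configured
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    (( total == expected + surp + resv )) || echo "global hugepage count off: $total"
    node_sum=0
    for node in /sys/devices/system/node/node+([0-9]); do
        (( node_sum += $(<"$node/hugepages/hugepages-2048kB/nr_hugepages") ))
    done
    (( node_sum == total )) || echo "per-node split disagrees: $node_sum != $total"

With surp=0 and resv=0 here, both checks pass and the script prints the node0=1024 expecting 1024 line seen below.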
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:39.940 node0=1024 expecting 1024
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:39.940 16:14:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:43.220 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:43.220 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.220 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:45.126 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 --
# verify_nr_hugepages 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75557760 kB' 'MemAvailable: 79035852 kB' 'Buffers: 4360 kB' 'Cached: 11430628 kB' 'SwapCached: 0 kB' 'Active: 8546316 kB' 'Inactive: 3529752 kB' 'Active(anon): 8043568 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644472 kB' 'Mapped: 185144 kB' 'Shmem: 7402488 kB' 'KReclaimable: 198000 kB' 'Slab: 556224 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358224 kB' 'KernelStack: 16656 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9470168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212184 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.126 16:14:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.126 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
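The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 above gates this AnonHugePages scan on transparent hugepages being enabled: the string is the content of the kernel's THP control file, with brackets marking the active mode, and the lookup only runs because [never] is not selected. A sketch of that gate, assuming the standard THP sysfs path (variable names are mine):

    # Only count THP-backed anonymous memory when THP is not disabled.
    # The bracketed word in this file is the active setting, e.g.
    # "always [madvise] never" as shown in the trace above.
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)
    else
        anon=0
    fi
    echo "anon_hugepages=$anon"

With AnonHugePages at 0 kB in the dump above, this prints anon_hugepages=0, the same result the trace reports.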
00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.127 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are each tested against \A\n\o\n\H\u\g\e\P\a\g\e\s the same way and skipped with continue]
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.128 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75558924 kB' 'MemAvailable: 79037016 kB' 'Buffers: 4360 kB' 'Cached: 11430632 kB' 'SwapCached: 0 kB' 'Active: 8546580 kB' 'Inactive: 3529752 kB' 'Active(anon): 8043832 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645240 kB' 'Mapped: 185068 kB' 'Shmem: 7402492 kB' 'KReclaimable: 198000 kB' 'Slab: 556288 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358288 kB' 'KernelStack: 16656 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9470184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212312 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
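For orientation, the hugepage counters in this snapshot are internally consistent: the test preallocated 1024 pages of 2048 kB each, so 'Hugetlb: 2097152 kB' is exactly 1024 * 2048 kB (2 GiB), and 'HugePages_Free: 1024' says none of them are in use yet. A standalone check of that arithmetic, illustrative only and not part of the suite:

  total=1024     # HugePages_Total from the snapshot above
  size_kb=2048   # Hugepagesize from the snapshot above
  echo "$((total * size_kb)) kB"                 # 2097152 kB, matching 'Hugetlb: 2097152 kB'
  echo "$((total * size_kb / 1024 / 1024)) GiB"  # 2 GiB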
[xtrace elided: the @31/@32 loop tests every snapshot field from MemTotal through HugePages_Rsvd against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skips each one with continue]
00:03:45.129 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.129 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.129 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
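Every one of these lookups is the same get_meminfo routine from setup/common.sh; rather than re-reading it from xtrace, here is a minimal reconstruction inferred from the @17-@33 records above -- a sketch under those assumptions, not a verbatim copy of the SPDK source:

  shopt -s extglob   # the +([0-9]) pattern below is an extglob

  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # With node empty this probes the nonexistent path
      # /sys/devices/system/node/node/meminfo -- exactly the [[ -e ... ]]
      # records in the trace -- and falls back to the system-wide file.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the long continue runs in this log
          echo "$val"                        # e.g. 0 for HugePages_Surp
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Each call rescans the whole file from the top, which is why every field from MemTotal onward shows up in the trace before the one that finally matches.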
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.155 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75556688 kB' 'MemAvailable: 79034780 kB' 'Buffers: 4360 kB' 'Cached: 11430648 kB' 'SwapCached: 0 kB' 'Active: 8547856 kB' 'Inactive: 3529752 kB' 'Active(anon): 8045108 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646032 kB' 'Mapped: 185068 kB' 'Shmem: 7402508 kB' 'KReclaimable: 198000 kB' 'Slab: 556192 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358192 kB' 'KernelStack: 17040 kB' 'PageTables: 10452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9469092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212264 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[xtrace elided: each field from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with continue]
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:45.157 nr_hugepages=1024
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:45.157 resv_hugepages=0
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:45.157 surplus_hugepages=0
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:45.157 anon_hugepages=0
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
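The hugepages.sh records from @97 to @110 are the actual no_shrink_alloc verification. Reassembled as straight-line shell (a sketch: helper names follow the trace, the get_meminfo sketch above is assumed to be sourced, and the literal source of the two arithmetic tests is not visible in the log -- only their expanded forms with 1024 already substituted):

  nr_hugepages=1024                    # pool size requested by the test setup

  anon=$(get_meminfo AnonHugePages)    # @97  -> 0
  surp=$(get_meminfo HugePages_Surp)   # @99  -> 0
  resv=$(get_meminfo HugePages_Rsvd)   # @100 -> 0

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # @107/@109 as traced: with surp and resv both 0, the two tests demand that
  # the preallocated pool still holds exactly the requested 1024 pages, i.e.
  # the allocations made by the test did not shrink it.
  (( 1024 == nr_hugepages + surp + resv ))
  (( 1024 == nr_hugepages ))

The get_meminfo HugePages_Total call at @110 then starts the third full scan, which follows.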
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.157 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75554276 kB' 'MemAvailable: 79032368 kB' 'Buffers: 4360 kB' 'Cached: 11430668 kB' 'SwapCached: 0 kB' 'Active: 8548244 kB' 'Inactive: 3529752 kB' 'Active(anon): 8045496 kB' 'Inactive(anon): 0 kB' 'Active(file): 502748 kB' 'Inactive(file): 3529752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646408 kB' 'Mapped: 185068 kB' 'Shmem: 7402528 kB' 'KReclaimable: 198000 kB' 'Slab: 556192 kB' 'SReclaimable: 198000 kB' 'SUnreclaim: 358192 kB' 'KernelStack: 17168 kB' 'PageTables: 10756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9470232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212312 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
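This is the third /proc/meminfo snapshot inside roughly 30 ms, and comparing the three is instructive: MemFree drifts down (75558924 -> 75556688 -> 75554276 kB) and KernelStack/PageTables grow as the suite does its work, while every HugePages_* counter stays pinned at 1024/1024/0/0 -- which is exactly the property no_shrink_alloc is checking. A throwaway way to eyeball such deltas between two reads, illustrative only and not part of the suite:

  before=$(</proc/meminfo)
  sleep 1
  after=$(</proc/meminfo)
  # Print only the fields whose values changed between the two reads.
  diff <(printf '%s\n' "$before") <(printf '%s\n' "$after") | grep '^[<>]' || echo 'no change'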
[xtrace elided: each field from MemTotal through VmallocChunk is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with continue; the scan carries on below]
00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.159 16:14:30 
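The xtrace above is the body of the get_meminfo helper (setup/common.sh) being traced line by line: it loads one meminfo file, strips the per-node "Node N " prefix, and scans field by field for the requested key. A minimal runnable sketch of that logic, reconstructed from the trace rather than copied from the SPDK tree:

```bash
#!/usr/bin/env bash
# Sketch of get_meminfo as the trace shows it working; details such as error
# handling in the real setup/common.sh may differ.
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		# Per-node files prefix every line with "Node N "; strip it so the
		# same "Field: value" parser works for both sources.
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	while IFS=': ' read -r var val _; do
		# Skip every field until the requested one, then print its value.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}
```

In this run, `get_meminfo HugePages_Total` prints 1024 (the "echo 1024 / return 0" above), and the per-node call that follows asks node 0 for HugePages_Surp.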
00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.159 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 40916900 kB' 'MemUsed: 7153012 kB' 'SwapCached: 0 kB' 'Active: 3059276 kB' 'Inactive: 115160 kB' 'Active(anon): 2732916 kB' 'Inactive(anon): 0 kB' 'Active(file): 326360 kB' 'Inactive(file): 115160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2792228 kB' 'Mapped: 80000 kB' 'AnonPages: 385348 kB' 'Shmem: 2350708 kB' 'KernelStack: 8680 kB' 'PageTables: 5676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71384 kB' 'Slab: 256364 kB' 'SReclaimable: 71384 kB' 'SUnreclaim: 184980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:45.159-160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # read -r var val _ / [[ $var == HugePages_Surp ]] / continue, repeated for every field printed above from MemTotal through HugePages_Free
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:45.160 node0=1024 expecting 1024
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:45.160 
00:03:45.160 real 0m10.704s
00:03:45.160 user 0m3.781s
00:03:45.160 sys 0m7.021s
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:45.160 16:14:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:45.160 ************************************
00:03:45.160 END TEST no_shrink_alloc
00:03:45.160 ************************************
00:03:45.419 16:14:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
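The check that just passed (hugepages.sh@110-@130) is the heart of no_shrink_alloc: the expected per-node page counts must match what the per-node meminfo files report. The xtrace only shows expanded values (1024 and 0), so the sketch below is a reconstruction, not the verbatim hugepages.sh; nr_hugepages, surp, resv and the seeding of nodes_test are assumed to be set by the surrounding test.

```bash
# Reconstructed sketch of hugepages.sh@110-@130; get_meminfo as sketched
# earlier, nr_hugepages/surp/resv assumed set by the caller.
shopt -s extglob

get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# Expanded in the trace to 1024 (node0) and 0 (node1):
		nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
	done
	no_nodes=${#nodes_sys[@]}
	((no_nodes > 0))
}

verify_nodes() {
	local node
	local -a sorted_t=() sorted_s=()
	(($(get_meminfo HugePages_Total) == nr_hugepages + surp + resv)) || return 1
	get_nodes
	for node in "${!nodes_test[@]}"; do
		((nodes_test[node] += resv))
		((nodes_test[node] += $(get_meminfo HugePages_Surp "$node")))
	done
	for node in "${!nodes_test[@]}"; do
		sorted_t[nodes_test[node]]=1
		sorted_s[nodes_sys[node]]=1
		echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
	done
	# Same set of counts on both sides => the allocation did not shrink.
	[[ ${sorted_s[*]} == "${sorted_t[*]}" ]]
}
```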
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:45.419 16:14:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:45.419 
00:03:45.419 real 0m41.707s
00:03:45.419 user 0m13.518s
00:03:45.419 sys 0m25.027s
00:03:45.419 16:14:30 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:45.419 16:14:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.419 ************************************
00:03:45.419 END TEST hugepages
00:03:45.419 ************************************
00:03:45.419 16:14:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:45.419 16:14:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:45.419 16:14:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:45.419 16:14:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:45.419 16:14:30 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:45.419 ************************************
00:03:45.419 START TEST driver
00:03:45.419 ************************************
00:03:45.419 16:14:30 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:45.419 * Looking for test storage...
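The clear_hp teardown traced above walks every node's hugepages directories and zeroes the pools. The xtrace shows only the bare `echo 0` (bash does not trace redirections), so the target file in this sketch is an assumption:

```bash
# Sketch of clear_hp (hugepages.sh@37-@45), reconstructed from the trace.
clear_hp() {
	local node hp
	for node in "${!nodes_sys[@]}"; do
		for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
			echo 0 >"$hp/nr_hugepages"  # assumed target; hidden by the redirect
		done
	done
	export CLEAR_HUGE=yes
}
```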
00:03:45.419 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:45.419 16:14:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:45.419 16:14:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:45.419 16:14:30 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:51.981 16:14:37 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:51.981 16:14:37 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:51.981 16:14:37 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:51.981 16:14:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:51.981 ************************************
00:03:51.981 START TEST guess_driver
00:03:51.981 ************************************
00:03:51.981 16:14:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:51.981 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:51.981 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:51.981 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:51.981 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 190 > 0 ))
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:51.982 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:51.982 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:51.982 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:51.982 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:51.982 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:51.982 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:51.982 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
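pick_driver settles on vfio-pci here because the box has 190 IOMMU groups and modprobe resolves vfio_pci to real .ko objects. A condensed sketch of that decision with the vfio helper folded in; this is reconstructed from the trace, so the exact control flow of driver.sh may differ, but the fallback string is the one guess_driver tests for at driver.sh@51:

```bash
# Sketch of pick_driver/vfio (driver.sh@21-@37), reconstructed from the trace.
pick_driver() {
	local iommu_groups
	local unsafe_vfio=N
	if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
		unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
	fi
	iommu_groups=(/sys/kernel/iommu_groups/*)
	# vfio-pci is only viable with IOMMU groups (or unsafe no-IOMMU mode) and
	# a module that modprobe can actually resolve to .ko objects.
	if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
		if [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]; then
			echo vfio-pci
			return 0
		fi
	fi
	echo 'No valid driver found'
}
```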
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:51.982 Looking for driver=vfio-pci
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:51.982 16:14:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:55.265-266 16:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@57-@61 -- # [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] / read -r _ _ _ _ marker setup_driver, repeated for each device line of the config output (final match at 00:03:58.564 16:14:43)
00:04:00.460 16:14:45 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:00.460 16:14:45 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:00.460 16:14:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:00.460 16:14:45 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:07.019 
00:04:07.019 real 0m15.044s
00:04:07.019 user 0m3.834s
00:04:07.019 sys 0m7.378s
00:04:07.019 16:14:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:07.019 16:14:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:07.019 ************************************
00:04:07.019 END TEST guess_driver
00:04:07.019 ************************************
00:04:07.019 16:14:52 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
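After picking the driver, guess_driver re-runs `setup.sh config` and walks its output: each repetition of the @57/@58/@61 triple above is one device line whose fifth field is "->" and whose sixth is the bound driver. A sketch under stated assumptions — the mismatch branch never fires in this run, so the fail counter's handling is inferred from `local fail=0` and the final `(( fail == 0 ))`:

```bash
# Sketch of the guess_driver scan loop (driver.sh@57-@65), reconstructed from
# the xtrace; rootdir is the SPDK checkout path seen in the log.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
driver=vfio-pci fail=0

while read -r _ _ _ _ marker setup_driver; do
	[[ $marker == '->' ]] || continue               # only device lines have "->"
	[[ $setup_driver == "$driver" ]] || ((++fail))  # assumed mismatch handling
done < <("$rootdir/scripts/setup.sh" config)

((fail == 0)) && "$rootdir/scripts/setup.sh" reset
```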
00:04:07.019 
00:04:07.019 real 0m21.445s
00:04:07.019 user 0m5.651s
00:04:07.019 sys 0m11.137s
00:04:07.019 16:14:52 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:07.019 16:14:52 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:07.019 ************************************
00:04:07.019 END TEST driver
00:04:07.019 ************************************
00:04:07.019 16:14:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:07.019 16:14:52 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:07.019 16:14:52 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:07.019 16:14:52 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:07.019 16:14:52 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:07.019 ************************************
00:04:07.019 START TEST devices
00:04:07.019 ************************************
00:04:07.019 16:14:52 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:07.019 * Looking for test storage...
00:04:07.019 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:04:07.019 16:14:52 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:07.019 16:14:52 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:07.019 16:14:52 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:07.019 16:14:52 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:12.287 16:14:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
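devices.sh starts by ruling out zoned namespaces: the trace shows is_block_zoned reading /sys/block/nvme0n1/queue/zoned and finding "none". A sketch of the pair of helpers traced from autotest_common.sh; how a zoned device would be keyed into zoned_devs is not visible in this run (the only disk is unzoned), so that line is an assumption:

```bash
# Sketch of is_block_zoned/get_zoned_devs, reconstructed from the trace.
is_block_zoned() {
	local device=$1
	[[ -e /sys/block/$device/queue/zoned ]] || return 1
	[[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

get_zoned_devs() {
	local -gA zoned_devs=()
	local nvme bdf
	for nvme in /sys/block/nvme*; do
		# Assumed bookkeeping; this run never reaches it.
		is_block_zoned "${nvme##*/}" && zoned_devs[${nvme##*/}]=1
	done
}
```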
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]]
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:12.288 16:14:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:12.288 16:14:57 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:12.288 No valid GPT data, bailing
00:04:12.288 16:14:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:12.288 16:14:57 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:12.288 16:14:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:12.288 16:14:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:12.288 16:14:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:12.288 16:14:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size ))
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
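Before any mount test runs, devices.sh builds the list of usable disks: every non-character NVMe namespace that is not zoned, carries no partition table (block_in_use probes the GPT via spdk-gpt.py, then falls back to blkid) and is at least min_disk_size. A sketch of that loop; get_block_pci is a hypothetical stand-in, because the log only shows the expanded result pci=0000:1a:00.0, not how devices.sh derives it:

```bash
# Sketch of the device-discovery loop (devices.sh@196-@211), reconstructed
# from the trace; block_in_use/sec_size_to_bytes as traced above.
shopt -s extglob

declare -a blocks=()
declare -A blocks_to_pci=()
min_disk_size=3221225472  # 3 GiB, as in the trace

for block in "/sys/block/nvme"!(*c*); do  # skip nvme character devices
	ctrl=${block##*/} ctrl=${ctrl%n*}     # nvme0n1 -> nvme0
	pci=$(get_block_pci "$ctrl")          # hypothetical helper, see above
	[[ ${zoned_devs[*]} == *"$pci"* ]] && continue
	if ! block_in_use "${block##*/}" \
		&& (($(sec_size_to_bytes "${block##*/}") >= min_disk_size)); then
		blocks+=("${block##*/}")
		blocks_to_pci["${block##*/}"]=$pci
	fi
done
((${#blocks[@]} > 0)) && declare -r test_disk=${blocks[0]}
```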
00:04:12.288 16:14:57 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:12.288 16:14:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:12.547 ************************************
00:04:12.547 START TEST nvme_mount
00:04:12.547 ************************************
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:12.547 16:14:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:13.483 Creating new GPT entries in memory.
00:04:13.484 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:13.484 other utilities.
00:04:13.484 16:14:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:13.484 16:14:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:13.484 16:14:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:13.484 16:14:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:13.484 16:14:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:14.450 Creating new GPT entries in memory.
00:04:14.450 The operation has completed successfully.
00:04:14.450 16:14:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:14.450 16:14:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:14.450 16:14:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1480489
00:04:14.450 16:14:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:14.450 16:14:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:14.450 16:14:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:14.450 16:14:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:14.450 16:14:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:14.736 16:15:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
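partition_drive wipes the GPT and lays out one 1 GiB partition per requested part (here `--new=1:2048:2099199`, i.e. 2048 + 1073741824/512 - 1), after which mkfs formats and mounts it. A sketch of both helpers as the trace shows them; the sync_dev_uevents.sh wrapper and the backgrounding/wait details are simplified away:

```bash
# Sketch of partition_drive + mkfs (setup/common.sh), reconstructed from the
# trace above; sizes and commands match the log.
partition_drive() {
	local disk=$1 part_no=${2:-1} size=1073741824  # 1 GiB per partition
	local part part_start=0 part_end=0
	local parts=()

	for ((part = 1; part <= part_no; part++)); do
		parts+=("${disk}p$part")
	done

	((size /= 512))  # sgdisk works in 512-byte sectors
	sgdisk "/dev/$disk" --zap-all
	for ((part = 1; part <= part_no; part++)); do
		((part_start = part_start == 0 ? 2048 : part_end + 1))
		((part_end = part_start + size - 1))
		flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
	done
}

mkfs() {
	local dev=$1 mount=$2 size=$3
	mkdir -p "$mount"
	# $size is left unquoted on purpose: it is empty for whole-partition
	# formatting and "1024M" for the sized run later in this log.
	[[ -e $dev ]] && mkfs.ext4 -qF "$dev" $size
	mount "$dev" "$mount"
}
```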
00:04:14.736 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:14.736 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0
00:04:14.736 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:14.736 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:14.736 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:14.736 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:14.737 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:14.737 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:14.737 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:14.737 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:14.737 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0
00:04:14.737 16:15:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:14.737 16:15:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.737 16:15:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:18.026 16:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:04:18.026 16:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:18.026 16:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:18.026 16:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60-@62 -- # read -r pci _ _ status / [[ $pci == \0\0\0\0\:\1\a\:\0\0\.\0 ]], repeated for 0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 (no further matches)
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:19.930 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:19.930 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:20.189 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:20.189 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:20.189 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:20.189 /dev/nvme0n1: calling ioctl to re-read partition table: Success
/dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:20.189 16:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:20.189 16:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.189 16:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:20.189 16:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.447 16:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:23.727 16:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- 
setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.102 16:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:28.388 16:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.287 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.287 00:04:30.287 real 0m17.955s 00:04:30.287 user 0m5.074s 00:04:30.287 sys 0m10.556s 
00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.287 16:15:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:30.287 ************************************ 00:04:30.287 END TEST nvme_mount 00:04:30.287 ************************************ 00:04:30.546 16:15:15 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:30.546 16:15:15 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:30.546 16:15:15 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.546 16:15:15 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.546 16:15:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:30.546 ************************************ 00:04:30.546 START TEST dm_mount 00:04:30.546 ************************************ 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:30.546 16:15:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:31.479 Creating new GPT entries in memory. 00:04:31.479 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:31.479 other utilities. 
00:04:31.479 16:15:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:31.479 16:15:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.479 16:15:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.479 16:15:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.479 16:15:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:32.421 Creating new GPT entries in memory. 00:04:32.421 The operation has completed successfully. 00:04:32.421 16:15:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:32.421 16:15:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.421 16:15:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.421 16:15:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.421 16:15:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:33.797 The operation has completed successfully. 00:04:33.797 16:15:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:33.797 16:15:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.797 16:15:18 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1485747 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.797 16:15:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.324 16:15:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.223 16:15:23 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:41.507 16:15:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:43.405 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:43.405 00:04:43.405 real 0m12.949s 00:04:43.405 user 0m2.888s 00:04:43.405 sys 0m6.846s 
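The dm_mount teardown just traced pairs the earlier dmsetup create with the removal and signature wipes; a condensed sketch follows, with $DM_MNT as a placeholder for the full .../spdk/test/setup/dm_mount path (the trace does not show the table piped into dmsetup create, so that detail is omitted here):
    mountpoint -q "$DM_MNT" && umount "$DM_MNT"                        # unmount the dm-backed filesystem
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test   # tear down dm-0
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1             # erase the ext4 signature
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2             # second partition of the mapped pair
Removing the mapper device before wiping matters: while dm-0 holds the partitions, /sys/class/block/nvme0n1p*/holders is non-empty and the setup script refuses to rebind the PCI device, as the verify loop above checked.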
00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.405 16:15:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:43.405 ************************************ 00:04:43.405 END TEST dm_mount 00:04:43.405 ************************************ 00:04:43.405 16:15:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:43.405 16:15:28 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:43.405 16:15:28 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:43.405 16:15:28 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.405 16:15:28 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.405 16:15:28 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.405 16:15:28 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.405 16:15:28 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.664 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:43.664 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:43.664 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.664 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.664 16:15:29 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:43.664 16:15:29 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:43.664 16:15:29 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.664 16:15:29 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.664 16:15:29 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.664 16:15:29 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.664 16:15:29 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:43.664 00:04:43.664 real 0m36.845s 00:04:43.664 user 0m9.757s 00:04:43.664 sys 0m21.304s 00:04:43.664 16:15:29 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.664 16:15:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.664 ************************************ 00:04:43.664 END TEST devices 00:04:43.664 ************************************ 00:04:43.664 16:15:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:43.664 00:04:43.664 real 2m16.040s 00:04:43.664 user 0m39.688s 00:04:43.664 sys 1m18.979s 00:04:43.664 16:15:29 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.664 16:15:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.664 ************************************ 00:04:43.664 END TEST setup.sh 00:04:43.664 ************************************ 00:04:43.923 16:15:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.923 16:15:29 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:47.208 Hugepages 00:04:47.208 node hugesize free / total 00:04:47.208 node0 1048576kB 0 / 0 00:04:47.208 node0 2048kB 2048 / 2048 00:04:47.208 node1 1048576kB 0 / 0 00:04:47.208 node1 2048kB 0 / 0 00:04:47.208 00:04:47.208 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.208 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 
00:04:47.208 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:47.208 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:47.208 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:47.208 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:47.208 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:47.208 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:47.208 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:47.208 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:47.208 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:47.208 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:47.208 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:47.208 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:47.208 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:47.208 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:47.208 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:47.208 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:47.208 16:15:32 -- spdk/autotest.sh@130 -- # uname -s 00:04:47.208 16:15:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:47.208 16:15:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:47.208 16:15:32 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:51.400 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:51.400 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.933 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:04:55.832 16:15:41 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:57.210 16:15:42 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:57.210 16:15:42 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:57.210 16:15:42 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:57.210 16:15:42 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:57.210 16:15:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:57.210 16:15:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:57.210 16:15:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.210 16:15:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:57.210 16:15:42 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:57.210 16:15:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:57.210 16:15:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:04:57.210 16:15:42 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.496 Waiting for block devices as requested 00:05:00.496 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:05:00.496 0000:00:04.7 (8086 2021): 
vfio-pci -> ioatdma 00:05:00.496 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:00.496 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:00.496 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:00.496 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:00.496 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:00.496 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:00.755 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:00.755 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:00.755 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:01.014 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:01.014 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:01.014 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:01.272 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:01.272 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:01.272 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.189 16:15:48 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:03.189 16:15:48 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:05:03.189 16:15:48 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.189 16:15:48 -- common/autotest_common.sh@1502 -- # grep 0000:1a:00.0/nvme/nvme 00:05:03.189 16:15:48 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:03.189 16:15:48 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:05:03.189 16:15:48 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:03.445 16:15:48 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:03.445 16:15:48 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:03.445 16:15:48 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:03.445 16:15:48 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:03.445 16:15:48 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:03.445 16:15:48 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:03.445 16:15:48 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:03.445 16:15:48 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:03.445 16:15:48 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:03.445 16:15:48 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:03.445 16:15:48 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:03.445 16:15:48 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:03.445 16:15:48 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:03.445 16:15:48 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:03.445 16:15:48 -- common/autotest_common.sh@1557 -- # continue 00:05:03.445 16:15:48 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.445 16:15:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.445 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:05:03.445 16:15:48 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.445 16:15:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.445 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:05:03.445 16:15:48 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:06.724 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:05:06.724 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:06.724 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.007 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.991 16:15:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:11.991 16:15:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.991 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.991 16:15:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:11.991 16:15:57 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:11.991 16:15:57 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.991 16:15:57 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:11.991 16:15:57 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:11.991 16:15:57 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:11.991 16:15:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:11.991 16:15:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:11.991 16:15:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.991 16:15:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.991 16:15:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:11.991 16:15:57 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:11.991 16:15:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:05:11.991 16:15:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:11.991 16:15:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:05:11.991 16:15:57 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:11.991 16:15:57 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:11.991 16:15:57 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:11.991 16:15:57 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:1a:00.0 00:05:11.991 16:15:57 -- common/autotest_common.sh@1592 -- # [[ -z 0000:1a:00.0 ]] 00:05:11.991 16:15:57 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1495669 00:05:11.991 16:15:57 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.991 16:15:57 -- common/autotest_common.sh@1598 -- # waitforlisten 1495669 00:05:11.991 16:15:57 -- common/autotest_common.sh@829 -- # '[' -z 1495669 ']' 00:05:11.991 16:15:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.991 16:15:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.991 16:15:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:11.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.991 16:15:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.991 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.991 [2024-07-15 16:15:57.296757] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:11.991 [2024-07-15 16:15:57.296827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495669 ] 00:05:11.991 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.991 [2024-07-15 16:15:57.373245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.991 [2024-07-15 16:15:57.461397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.570 16:15:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.570 16:15:58 -- common/autotest_common.sh@862 -- # return 0 00:05:12.570 16:15:58 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:12.570 16:15:58 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:12.570 16:15:58 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:05:15.850 nvme0n1 00:05:15.850 16:16:01 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:15.850 [2024-07-15 16:16:01.315883] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:15.850 request: 00:05:15.850 { 00:05:15.850 "nvme_ctrlr_name": "nvme0", 00:05:15.850 "password": "test", 00:05:15.850 "method": "bdev_nvme_opal_revert", 00:05:15.850 "req_id": 1 00:05:15.850 } 00:05:15.850 Got JSON-RPC error response 00:05:15.850 response: 00:05:15.850 { 00:05:15.850 "code": -32602, 00:05:15.850 "message": "Invalid parameters" 00:05:15.850 } 00:05:15.850 16:16:01 -- common/autotest_common.sh@1604 -- # true 00:05:15.850 16:16:01 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:15.850 16:16:01 -- common/autotest_common.sh@1608 -- # killprocess 1495669 00:05:15.850 16:16:01 -- common/autotest_common.sh@948 -- # '[' -z 1495669 ']' 00:05:15.850 16:16:01 -- common/autotest_common.sh@952 -- # kill -0 1495669 00:05:15.850 16:16:01 -- common/autotest_common.sh@953 -- # uname 00:05:15.850 16:16:01 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.850 16:16:01 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1495669 00:05:15.850 16:16:01 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.850 16:16:01 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.850 16:16:01 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1495669' 00:05:15.850 killing process with pid 1495669 00:05:15.850 16:16:01 -- common/autotest_common.sh@967 -- # kill 1495669 00:05:15.850 16:16:01 -- common/autotest_common.sh@972 -- # wait 1495669 00:05:20.038 16:16:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:20.038 16:16:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:20.038 16:16:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:20.038 16:16:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:20.038 16:16:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:20.038 16:16:05 -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.038 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:05:20.038 16:16:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:20.038 16:16:05 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:20.038 16:16:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.038 16:16:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.038 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:05:20.038 ************************************ 00:05:20.038 START TEST env 00:05:20.038 ************************************ 00:05:20.038 16:16:05 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:20.038 * Looking for test storage... 00:05:20.038 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:20.038 16:16:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.038 16:16:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.038 16:16:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.038 16:16:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.038 ************************************ 00:05:20.038 START TEST env_memory 00:05:20.038 ************************************ 00:05:20.038 16:16:05 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.038 00:05:20.038 00:05:20.038 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.038 http://cunit.sourceforge.net/ 00:05:20.038 00:05:20.038 00:05:20.038 Suite: memory 00:05:20.038 Test: alloc and free memory map ...[2024-07-15 16:16:05.545432] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.038 passed 00:05:20.038 Test: mem map translation ...[2024-07-15 16:16:05.559278] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.038 [2024-07-15 16:16:05.559297] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.038 [2024-07-15 16:16:05.559328] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.038 [2024-07-15 16:16:05.559338] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.038 passed 00:05:20.038 Test: mem map registration ...[2024-07-15 16:16:05.580480] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:20.038 [2024-07-15 16:16:05.580497] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:20.038 passed 00:05:20.038 Test: mem map adjacent registrations ...passed 00:05:20.038 00:05:20.038 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:20.038 suites 1 1 n/a 0 0 00:05:20.038 tests 4 4 4 0 0 00:05:20.038 asserts 152 152 152 0 n/a 00:05:20.038 00:05:20.038 Elapsed time = 0.086 seconds 00:05:20.038 00:05:20.038 real 0m0.099s 00:05:20.038 user 0m0.082s 00:05:20.038 sys 0m0.016s 00:05:20.038 16:16:05 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.038 16:16:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:20.038 ************************************ 00:05:20.038 END TEST env_memory 00:05:20.038 ************************************ 00:05:20.298 16:16:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:20.298 16:16:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.298 16:16:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.298 16:16:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.298 16:16:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.298 ************************************ 00:05:20.298 START TEST env_vtophys 00:05:20.298 ************************************ 00:05:20.298 16:16:05 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.298 EAL: lib.eal log level changed from notice to debug 00:05:20.298 EAL: Detected lcore 0 as core 0 on socket 0 00:05:20.298 EAL: Detected lcore 1 as core 1 on socket 0 00:05:20.298 EAL: Detected lcore 2 as core 2 on socket 0 00:05:20.298 EAL: Detected lcore 3 as core 3 on socket 0 00:05:20.298 EAL: Detected lcore 4 as core 4 on socket 0 00:05:20.298 EAL: Detected lcore 5 as core 8 on socket 0 00:05:20.298 EAL: Detected lcore 6 as core 9 on socket 0 00:05:20.298 EAL: Detected lcore 7 as core 10 on socket 0 00:05:20.298 EAL: Detected lcore 8 as core 11 on socket 0 00:05:20.298 EAL: Detected lcore 9 as core 16 on socket 0 00:05:20.298 EAL: Detected lcore 10 as core 17 on socket 0 00:05:20.298 EAL: Detected lcore 11 as core 18 on socket 0 00:05:20.298 EAL: Detected lcore 12 as core 19 on socket 0 00:05:20.298 EAL: Detected lcore 13 as core 20 on socket 0 00:05:20.298 EAL: Detected lcore 14 as core 24 on socket 0 00:05:20.298 EAL: Detected lcore 15 as core 25 on socket 0 00:05:20.298 EAL: Detected lcore 16 as core 26 on socket 0 00:05:20.298 EAL: Detected lcore 17 as core 27 on socket 0 00:05:20.298 EAL: Detected lcore 18 as core 0 on socket 1 00:05:20.298 EAL: Detected lcore 19 as core 1 on socket 1 00:05:20.298 EAL: Detected lcore 20 as core 2 on socket 1 00:05:20.298 EAL: Detected lcore 21 as core 3 on socket 1 00:05:20.298 EAL: Detected lcore 22 as core 4 on socket 1 00:05:20.298 EAL: Detected lcore 23 as core 8 on socket 1 00:05:20.298 EAL: Detected lcore 24 as core 9 on socket 1 00:05:20.298 EAL: Detected lcore 25 as core 10 on socket 1 00:05:20.298 EAL: Detected lcore 26 as core 11 on socket 1 00:05:20.298 EAL: Detected lcore 27 as core 16 on socket 1 00:05:20.298 EAL: Detected lcore 28 as core 17 on socket 1 00:05:20.298 EAL: Detected lcore 29 as core 18 on socket 1 00:05:20.298 EAL: Detected lcore 30 as core 19 on socket 1 00:05:20.298 EAL: Detected lcore 31 as core 20 on socket 1 00:05:20.298 EAL: Detected lcore 32 as core 24 on socket 1 00:05:20.298 EAL: Detected lcore 33 as core 25 on socket 1 00:05:20.298 EAL: Detected lcore 34 as core 26 on socket 1 00:05:20.298 EAL: Detected lcore 35 as core 27 on socket 1 00:05:20.298 EAL: Detected lcore 36 as core 0 on socket 0 00:05:20.298 EAL: Detected lcore 37 
as core 1 on socket 0 00:05:20.298 EAL: Detected lcore 38 as core 2 on socket 0 00:05:20.298 EAL: Detected lcore 39 as core 3 on socket 0 00:05:20.298 EAL: Detected lcore 40 as core 4 on socket 0 00:05:20.298 EAL: Detected lcore 41 as core 8 on socket 0 00:05:20.298 EAL: Detected lcore 42 as core 9 on socket 0 00:05:20.298 EAL: Detected lcore 43 as core 10 on socket 0 00:05:20.298 EAL: Detected lcore 44 as core 11 on socket 0 00:05:20.298 EAL: Detected lcore 45 as core 16 on socket 0 00:05:20.298 EAL: Detected lcore 46 as core 17 on socket 0 00:05:20.298 EAL: Detected lcore 47 as core 18 on socket 0 00:05:20.298 EAL: Detected lcore 48 as core 19 on socket 0 00:05:20.298 EAL: Detected lcore 49 as core 20 on socket 0 00:05:20.298 EAL: Detected lcore 50 as core 24 on socket 0 00:05:20.298 EAL: Detected lcore 51 as core 25 on socket 0 00:05:20.298 EAL: Detected lcore 52 as core 26 on socket 0 00:05:20.298 EAL: Detected lcore 53 as core 27 on socket 0 00:05:20.298 EAL: Detected lcore 54 as core 0 on socket 1 00:05:20.298 EAL: Detected lcore 55 as core 1 on socket 1 00:05:20.298 EAL: Detected lcore 56 as core 2 on socket 1 00:05:20.298 EAL: Detected lcore 57 as core 3 on socket 1 00:05:20.298 EAL: Detected lcore 58 as core 4 on socket 1 00:05:20.298 EAL: Detected lcore 59 as core 8 on socket 1 00:05:20.298 EAL: Detected lcore 60 as core 9 on socket 1 00:05:20.298 EAL: Detected lcore 61 as core 10 on socket 1 00:05:20.298 EAL: Detected lcore 62 as core 11 on socket 1 00:05:20.298 EAL: Detected lcore 63 as core 16 on socket 1 00:05:20.298 EAL: Detected lcore 64 as core 17 on socket 1 00:05:20.298 EAL: Detected lcore 65 as core 18 on socket 1 00:05:20.299 EAL: Detected lcore 66 as core 19 on socket 1 00:05:20.299 EAL: Detected lcore 67 as core 20 on socket 1 00:05:20.299 EAL: Detected lcore 68 as core 24 on socket 1 00:05:20.299 EAL: Detected lcore 69 as core 25 on socket 1 00:05:20.299 EAL: Detected lcore 70 as core 26 on socket 1 00:05:20.299 EAL: Detected lcore 71 as core 27 on socket 1 00:05:20.299 EAL: Maximum logical cores by configuration: 128 00:05:20.299 EAL: Detected CPU lcores: 72 00:05:20.299 EAL: Detected NUMA nodes: 2 00:05:20.299 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:20.299 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:20.299 EAL: Checking presence of .so 'librte_eal.so' 00:05:20.299 EAL: Detected static linkage of DPDK 00:05:20.299 EAL: No shared files mode enabled, IPC will be disabled 00:05:20.299 EAL: Bus pci wants IOVA as 'DC' 00:05:20.299 EAL: Buses did not request a specific IOVA mode. 00:05:20.299 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:20.299 EAL: Selected IOVA mode 'VA' 00:05:20.299 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.299 EAL: Probing VFIO support... 00:05:20.299 EAL: IOMMU type 1 (Type 1) is supported 00:05:20.299 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:20.299 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:20.299 EAL: VFIO support initialized 00:05:20.299 EAL: Ask a virtual area of 0x2e000 bytes 00:05:20.299 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:20.299 EAL: Setting up physically contiguous memory... 
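[Editor's note] The EAL bring-up traced above — lcore/socket detection, the static-linkage check, IOVA mode selection, VFIO probing, and virtual-area reservation — is standard DPDK environment initialization, driven here by the vtophys test binary. For reference only, here is a minimal sketch of the same bring-up; it assumes DPDK development headers and configured hugepages and is not the test's actual source:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses EAL flags such as -c 0x1 and
         * --base-virtaddr=0x200000000000 (both visible in this log) and,
         * at debug log level, emits the "Detected lcore ..." and
         * "Probing VFIO support..." messages seen above. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        unsigned int lcore_id;
        /* Echo the lcore-to-socket mapping EAL detected. */
        RTE_LCORE_FOREACH(lcore_id)
            printf("lcore %u -> socket %u\n",
                   lcore_id, rte_lcore_to_socket_id(lcore_id));

        rte_eal_cleanup();
        return 0;
    }

Running such a binary with the flags recorded in this log would produce a comparable EAL trace before the memseg-list setup that follows.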
00:05:20.299 EAL: Setting maximum number of open files to 524288 00:05:20.299 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:20.299 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:20.299 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:20.299 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.299 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:20.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.299 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.299 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:20.299 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:20.299 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.299 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:20.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.299 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.299 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:20.299 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:20.299 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.299 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:20.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.299 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.299 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:20.299 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:20.299 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.299 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:20.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.299 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.299 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:20.299 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:20.299 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:20.299 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.299 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:20.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.299 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.299 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:20.299 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:20.299 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.299 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:20.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.299 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.299 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:20.299 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:20.299 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.299 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:20.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.299 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.299 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:20.299 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:20.299 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.299 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:20.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.299 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.299 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:20.299 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:20.299 EAL: Hugepages will be freed exactly as allocated. 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: TSC frequency is ~2300000 KHz 00:05:20.299 EAL: Main lcore 0 is ready (tid=7f1f7c711a00;cpuset=[0]) 00:05:20.299 EAL: Trying to obtain current memory policy. 00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.299 EAL: Restoring previous memory policy: 0 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.299 00:05:20.299 00:05:20.299 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.299 http://cunit.sourceforge.net/ 00:05:20.299 00:05:20.299 00:05:20.299 Suite: components_suite 00:05:20.299 Test: vtophys_malloc_test ...passed 00:05:20.299 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.299 EAL: Restoring previous memory policy: 4 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.299 EAL: Trying to obtain current memory policy. 00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.299 EAL: Restoring previous memory policy: 4 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.299 EAL: Trying to obtain current memory policy. 00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.299 EAL: Restoring previous memory policy: 4 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.299 EAL: Trying to obtain current memory policy. 
00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.299 EAL: Restoring previous memory policy: 4 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.299 EAL: Trying to obtain current memory policy. 00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.299 EAL: Restoring previous memory policy: 4 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.299 EAL: Trying to obtain current memory policy. 00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.299 EAL: Restoring previous memory policy: 4 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.299 EAL: Trying to obtain current memory policy. 00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.558 EAL: Restoring previous memory policy: 4 00:05:20.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.558 EAL: request: mp_malloc_sync 00:05:20.558 EAL: No shared files mode enabled, IPC is disabled 00:05:20.558 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.558 EAL: request: mp_malloc_sync 00:05:20.558 EAL: No shared files mode enabled, IPC is disabled 00:05:20.558 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.558 EAL: Trying to obtain current memory policy. 00:05:20.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.558 EAL: Restoring previous memory policy: 4 00:05:20.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.558 EAL: request: mp_malloc_sync 00:05:20.558 EAL: No shared files mode enabled, IPC is disabled 00:05:20.558 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.558 EAL: request: mp_malloc_sync 00:05:20.558 EAL: No shared files mode enabled, IPC is disabled 00:05:20.558 EAL: Heap on socket 0 was shrunk by 258MB 00:05:20.558 EAL: Trying to obtain current memory policy. 
00:05:20.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.816 EAL: Restoring previous memory policy: 4 00:05:20.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.816 EAL: request: mp_malloc_sync 00:05:20.816 EAL: No shared files mode enabled, IPC is disabled 00:05:20.816 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.816 EAL: request: mp_malloc_sync 00:05:20.816 EAL: No shared files mode enabled, IPC is disabled 00:05:20.816 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.816 EAL: Trying to obtain current memory policy. 00:05:20.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.075 EAL: Restoring previous memory policy: 4 00:05:21.075 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.075 EAL: request: mp_malloc_sync 00:05:21.075 EAL: No shared files mode enabled, IPC is disabled 00:05:21.075 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.333 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.592 EAL: request: mp_malloc_sync 00:05:21.592 EAL: No shared files mode enabled, IPC is disabled 00:05:21.592 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:21.592 passed 00:05:21.592 00:05:21.592 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.592 suites 1 1 n/a 0 0 00:05:21.592 tests 2 2 2 0 0 00:05:21.592 asserts 497 497 497 0 n/a 00:05:21.592 00:05:21.592 Elapsed time = 1.114 seconds 00:05:21.592 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.592 EAL: request: mp_malloc_sync 00:05:21.592 EAL: No shared files mode enabled, IPC is disabled 00:05:21.592 EAL: Heap on socket 0 was shrunk by 2MB 00:05:21.592 EAL: No shared files mode enabled, IPC is disabled 00:05:21.592 EAL: No shared files mode enabled, IPC is disabled 00:05:21.592 EAL: No shared files mode enabled, IPC is disabled 00:05:21.592 00:05:21.592 real 0m1.249s 00:05:21.592 user 0m0.720s 00:05:21.592 sys 0m0.494s 00:05:21.592 16:16:06 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.592 16:16:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:21.592 ************************************ 00:05:21.592 END TEST env_vtophys 00:05:21.592 ************************************ 00:05:21.592 16:16:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:21.592 16:16:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.592 16:16:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.592 16:16:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.592 16:16:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.592 ************************************ 00:05:21.592 START TEST env_pci 00:05:21.592 ************************************ 00:05:21.592 16:16:07 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.592 00:05:21.592 00:05:21.592 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.592 http://cunit.sourceforge.net/ 00:05:21.592 00:05:21.592 00:05:21.592 Suite: pci 00:05:21.592 Test: pci_hook ...[2024-07-15 16:16:07.038115] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1496991 has claimed it 00:05:21.592 EAL: Cannot find device (10000:00:01.0) 00:05:21.592 EAL: Failed to attach device on primary process 00:05:21.592 passed 
00:05:21.592 00:05:21.592 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.592 suites 1 1 n/a 0 0 00:05:21.592 tests 1 1 1 0 0 00:05:21.592 asserts 25 25 25 0 n/a 00:05:21.592 00:05:21.592 Elapsed time = 0.033 seconds 00:05:21.592 00:05:21.592 real 0m0.052s 00:05:21.592 user 0m0.015s 00:05:21.592 sys 0m0.037s 00:05:21.592 16:16:07 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.592 16:16:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:21.592 ************************************ 00:05:21.592 END TEST env_pci 00:05:21.592 ************************************ 00:05:21.592 16:16:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:21.592 16:16:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:21.592 16:16:07 env -- env/env.sh@15 -- # uname 00:05:21.592 16:16:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:21.592 16:16:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:21.592 16:16:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.592 16:16:07 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:21.592 16:16:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.592 16:16:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.592 ************************************ 00:05:21.592 START TEST env_dpdk_post_init 00:05:21.592 ************************************ 00:05:21.592 16:16:07 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.851 EAL: Detected CPU lcores: 72 00:05:21.851 EAL: Detected NUMA nodes: 2 00:05:21.851 EAL: Detected static linkage of DPDK 00:05:21.851 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.851 EAL: Selected IOVA mode 'VA' 00:05:21.851 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.851 EAL: VFIO support initialized 00:05:21.851 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.851 EAL: Using IOMMU type 1 (Type 1) 00:05:22.785 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:05:28.046 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:05:28.046 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:05:28.305 Starting DPDK initialization... 00:05:28.305 Starting SPDK post initialization... 00:05:28.305 SPDK NVMe probe 00:05:28.305 Attaching to 0000:1a:00.0 00:05:28.305 Attached to 0000:1a:00.0 00:05:28.305 Cleaning up... 
00:05:28.305 00:05:28.305 real 0m6.479s 00:05:28.305 user 0m4.953s 00:05:28.305 sys 0m0.775s 00:05:28.305 16:16:13 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.305 16:16:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.305 ************************************ 00:05:28.305 END TEST env_dpdk_post_init 00:05:28.305 ************************************ 00:05:28.305 16:16:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.305 16:16:13 env -- env/env.sh@26 -- # uname 00:05:28.305 16:16:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.305 16:16:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.305 16:16:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.305 16:16:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.305 16:16:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.305 ************************************ 00:05:28.305 START TEST env_mem_callbacks 00:05:28.305 ************************************ 00:05:28.305 16:16:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.305 EAL: Detected CPU lcores: 72 00:05:28.305 EAL: Detected NUMA nodes: 2 00:05:28.305 EAL: Detected static linkage of DPDK 00:05:28.305 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.305 EAL: Selected IOVA mode 'VA' 00:05:28.305 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.305 EAL: VFIO support initialized 00:05:28.305 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.305 00:05:28.305 00:05:28.305 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.305 http://cunit.sourceforge.net/ 00:05:28.305 00:05:28.305 00:05:28.305 Suite: memory 00:05:28.305 Test: test ... 
00:05:28.305 register 0x200000200000 2097152 00:05:28.305 malloc 3145728 00:05:28.305 register 0x200000400000 4194304 00:05:28.305 buf 0x200000500000 len 3145728 PASSED 00:05:28.305 malloc 64 00:05:28.305 buf 0x2000004fff40 len 64 PASSED 00:05:28.305 malloc 4194304 00:05:28.305 register 0x200000800000 6291456 00:05:28.305 buf 0x200000a00000 len 4194304 PASSED 00:05:28.305 free 0x200000500000 3145728 00:05:28.305 free 0x2000004fff40 64 00:05:28.305 unregister 0x200000400000 4194304 PASSED 00:05:28.305 free 0x200000a00000 4194304 00:05:28.305 unregister 0x200000800000 6291456 PASSED 00:05:28.305 malloc 8388608 00:05:28.305 register 0x200000400000 10485760 00:05:28.305 buf 0x200000600000 len 8388608 PASSED 00:05:28.305 free 0x200000600000 8388608 00:05:28.305 unregister 0x200000400000 10485760 PASSED 00:05:28.305 passed 00:05:28.305 00:05:28.305 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.305 suites 1 1 n/a 0 0 00:05:28.305 tests 1 1 1 0 0 00:05:28.305 asserts 15 15 15 0 n/a 00:05:28.305 00:05:28.305 Elapsed time = 0.005 seconds 00:05:28.305 00:05:28.305 real 0m0.069s 00:05:28.305 user 0m0.026s 00:05:28.305 sys 0m0.043s 00:05:28.305 16:16:13 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.305 16:16:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:28.305 ************************************ 00:05:28.305 END TEST env_mem_callbacks 00:05:28.305 ************************************ 00:05:28.305 16:16:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.305 00:05:28.305 real 0m8.448s 00:05:28.305 user 0m5.996s 00:05:28.305 sys 0m1.699s 00:05:28.305 16:16:13 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.305 16:16:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.305 ************************************ 00:05:28.305 END TEST env 00:05:28.305 ************************************ 00:05:28.305 16:16:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.305 16:16:13 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.305 16:16:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.305 16:16:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.305 16:16:13 -- common/autotest_common.sh@10 -- # set +x 00:05:28.564 ************************************ 00:05:28.564 START TEST rpc 00:05:28.564 ************************************ 00:05:28.564 16:16:13 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.564 * Looking for test storage... 00:05:28.564 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:28.564 16:16:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1497984 00:05:28.564 16:16:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.564 16:16:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:28.564 16:16:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1497984 00:05:28.564 16:16:13 rpc -- common/autotest_common.sh@829 -- # '[' -z 1497984 ']' 00:05:28.564 16:16:13 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.564 16:16:13 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.564 16:16:13 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:28.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.564 16:16:13 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.564 16:16:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.564 [2024-07-15 16:16:14.023234] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:28.564 [2024-07-15 16:16:14.023304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497984 ] 00:05:28.564 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.564 [2024-07-15 16:16:14.100282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.822 [2024-07-15 16:16:14.190440] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:28.822 [2024-07-15 16:16:14.190478] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1497984' to capture a snapshot of events at runtime. 00:05:28.822 [2024-07-15 16:16:14.190489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.822 [2024-07-15 16:16:14.190498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.822 [2024-07-15 16:16:14.190506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1497984 for offline analysis/debug. 00:05:28.822 [2024-07-15 16:16:14.190539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.423 16:16:14 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.423 16:16:14 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.423 16:16:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:29.423 16:16:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:29.423 16:16:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:29.423 16:16:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:29.423 16:16:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.423 16:16:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.423 16:16:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.423 ************************************ 00:05:29.423 START TEST rpc_integrity 00:05:29.423 ************************************ 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:29.423 16:16:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.423 16:16:14 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.423 16:16:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.423 16:16:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.423 16:16:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.423 16:16:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:29.423 16:16:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.423 16:16:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.423 16:16:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.423 { 00:05:29.423 "name": "Malloc0", 00:05:29.423 "aliases": [ 00:05:29.423 "0e1ae7df-9256-4413-86e2-1d3fe0a8de82" 00:05:29.423 ], 00:05:29.423 "product_name": "Malloc disk", 00:05:29.423 "block_size": 512, 00:05:29.423 "num_blocks": 16384, 00:05:29.423 "uuid": "0e1ae7df-9256-4413-86e2-1d3fe0a8de82", 00:05:29.423 "assigned_rate_limits": { 00:05:29.423 "rw_ios_per_sec": 0, 00:05:29.423 "rw_mbytes_per_sec": 0, 00:05:29.423 "r_mbytes_per_sec": 0, 00:05:29.423 "w_mbytes_per_sec": 0 00:05:29.423 }, 00:05:29.423 "claimed": false, 00:05:29.423 "zoned": false, 00:05:29.423 "supported_io_types": { 00:05:29.423 "read": true, 00:05:29.423 "write": true, 00:05:29.423 "unmap": true, 00:05:29.423 "flush": true, 00:05:29.423 "reset": true, 00:05:29.423 "nvme_admin": false, 00:05:29.423 "nvme_io": false, 00:05:29.423 "nvme_io_md": false, 00:05:29.423 "write_zeroes": true, 00:05:29.423 "zcopy": true, 00:05:29.423 "get_zone_info": false, 00:05:29.423 "zone_management": false, 00:05:29.423 "zone_append": false, 00:05:29.423 "compare": false, 00:05:29.423 "compare_and_write": false, 00:05:29.423 "abort": true, 00:05:29.423 "seek_hole": false, 00:05:29.423 "seek_data": false, 00:05:29.423 "copy": true, 00:05:29.423 "nvme_iov_md": false 00:05:29.423 }, 00:05:29.423 "memory_domains": [ 00:05:29.423 { 00:05:29.423 "dma_device_id": "system", 00:05:29.423 "dma_device_type": 1 00:05:29.423 }, 00:05:29.423 { 00:05:29.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.423 "dma_device_type": 2 00:05:29.423 } 00:05:29.423 ], 00:05:29.423 "driver_specific": {} 00:05:29.423 } 00:05:29.423 ]' 00:05:29.423 16:16:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.682 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.682 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:29.682 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.682 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.682 [2024-07-15 16:16:15.033776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:29.682 [2024-07-15 16:16:15.033815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.682 [2024-07-15 16:16:15.033833] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e60650 00:05:29.682 [2024-07-15 16:16:15.033842] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:29.682 [2024-07-15 16:16:15.034711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.682 [2024-07-15 16:16:15.034736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.682 Passthru0 00:05:29.682 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.682 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.682 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.682 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.682 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.682 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.682 { 00:05:29.682 "name": "Malloc0", 00:05:29.682 "aliases": [ 00:05:29.682 "0e1ae7df-9256-4413-86e2-1d3fe0a8de82" 00:05:29.682 ], 00:05:29.682 "product_name": "Malloc disk", 00:05:29.682 "block_size": 512, 00:05:29.682 "num_blocks": 16384, 00:05:29.682 "uuid": "0e1ae7df-9256-4413-86e2-1d3fe0a8de82", 00:05:29.682 "assigned_rate_limits": { 00:05:29.682 "rw_ios_per_sec": 0, 00:05:29.682 "rw_mbytes_per_sec": 0, 00:05:29.682 "r_mbytes_per_sec": 0, 00:05:29.682 "w_mbytes_per_sec": 0 00:05:29.682 }, 00:05:29.682 "claimed": true, 00:05:29.682 "claim_type": "exclusive_write", 00:05:29.682 "zoned": false, 00:05:29.682 "supported_io_types": { 00:05:29.682 "read": true, 00:05:29.682 "write": true, 00:05:29.682 "unmap": true, 00:05:29.682 "flush": true, 00:05:29.682 "reset": true, 00:05:29.682 "nvme_admin": false, 00:05:29.682 "nvme_io": false, 00:05:29.682 "nvme_io_md": false, 00:05:29.682 "write_zeroes": true, 00:05:29.682 "zcopy": true, 00:05:29.682 "get_zone_info": false, 00:05:29.682 "zone_management": false, 00:05:29.682 "zone_append": false, 00:05:29.682 "compare": false, 00:05:29.682 "compare_and_write": false, 00:05:29.682 "abort": true, 00:05:29.682 "seek_hole": false, 00:05:29.682 "seek_data": false, 00:05:29.682 "copy": true, 00:05:29.682 "nvme_iov_md": false 00:05:29.682 }, 00:05:29.682 "memory_domains": [ 00:05:29.682 { 00:05:29.682 "dma_device_id": "system", 00:05:29.682 "dma_device_type": 1 00:05:29.682 }, 00:05:29.682 { 00:05:29.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.682 "dma_device_type": 2 00:05:29.682 } 00:05:29.682 ], 00:05:29.682 "driver_specific": {} 00:05:29.682 }, 00:05:29.682 { 00:05:29.682 "name": "Passthru0", 00:05:29.682 "aliases": [ 00:05:29.682 "0d5239fe-d7b1-5529-89fe-65861c542ee4" 00:05:29.682 ], 00:05:29.682 "product_name": "passthru", 00:05:29.682 "block_size": 512, 00:05:29.682 "num_blocks": 16384, 00:05:29.682 "uuid": "0d5239fe-d7b1-5529-89fe-65861c542ee4", 00:05:29.682 "assigned_rate_limits": { 00:05:29.682 "rw_ios_per_sec": 0, 00:05:29.682 "rw_mbytes_per_sec": 0, 00:05:29.682 "r_mbytes_per_sec": 0, 00:05:29.682 "w_mbytes_per_sec": 0 00:05:29.682 }, 00:05:29.682 "claimed": false, 00:05:29.682 "zoned": false, 00:05:29.682 "supported_io_types": { 00:05:29.682 "read": true, 00:05:29.682 "write": true, 00:05:29.682 "unmap": true, 00:05:29.682 "flush": true, 00:05:29.682 "reset": true, 00:05:29.682 "nvme_admin": false, 00:05:29.682 "nvme_io": false, 00:05:29.682 "nvme_io_md": false, 00:05:29.682 "write_zeroes": true, 00:05:29.682 "zcopy": true, 00:05:29.682 "get_zone_info": false, 00:05:29.682 "zone_management": false, 00:05:29.682 "zone_append": false, 00:05:29.682 "compare": false, 00:05:29.682 "compare_and_write": false, 00:05:29.682 "abort": true, 00:05:29.682 
"seek_hole": false, 00:05:29.682 "seek_data": false, 00:05:29.682 "copy": true, 00:05:29.682 "nvme_iov_md": false 00:05:29.682 }, 00:05:29.682 "memory_domains": [ 00:05:29.682 { 00:05:29.682 "dma_device_id": "system", 00:05:29.682 "dma_device_type": 1 00:05:29.682 }, 00:05:29.682 { 00:05:29.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.682 "dma_device_type": 2 00:05:29.682 } 00:05:29.682 ], 00:05:29.683 "driver_specific": { 00:05:29.683 "passthru": { 00:05:29.683 "name": "Passthru0", 00:05:29.683 "base_bdev_name": "Malloc0" 00:05:29.683 } 00:05:29.683 } 00:05:29.683 } 00:05:29.683 ]' 00:05:29.683 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.683 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.683 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.683 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.683 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.683 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.683 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.683 16:16:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.683 00:05:29.683 real 0m0.296s 00:05:29.683 user 0m0.178s 00:05:29.683 sys 0m0.054s 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.683 16:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.683 ************************************ 00:05:29.683 END TEST rpc_integrity 00:05:29.683 ************************************ 00:05:29.683 16:16:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.683 16:16:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:29.683 16:16:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.683 16:16:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.683 16:16:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.941 ************************************ 00:05:29.941 START TEST rpc_plugins 00:05:29.941 ************************************ 00:05:29.941 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:29.941 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:29.941 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.941 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.941 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.941 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:29.941 16:16:15 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:29.941 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.941 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.941 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.941 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:29.941 { 00:05:29.941 "name": "Malloc1", 00:05:29.941 "aliases": [ 00:05:29.941 "449dffff-fea9-4587-a321-52af19d73982" 00:05:29.941 ], 00:05:29.941 "product_name": "Malloc disk", 00:05:29.941 "block_size": 4096, 00:05:29.941 "num_blocks": 256, 00:05:29.941 "uuid": "449dffff-fea9-4587-a321-52af19d73982", 00:05:29.941 "assigned_rate_limits": { 00:05:29.941 "rw_ios_per_sec": 0, 00:05:29.941 "rw_mbytes_per_sec": 0, 00:05:29.941 "r_mbytes_per_sec": 0, 00:05:29.941 "w_mbytes_per_sec": 0 00:05:29.941 }, 00:05:29.941 "claimed": false, 00:05:29.941 "zoned": false, 00:05:29.941 "supported_io_types": { 00:05:29.941 "read": true, 00:05:29.941 "write": true, 00:05:29.941 "unmap": true, 00:05:29.941 "flush": true, 00:05:29.941 "reset": true, 00:05:29.941 "nvme_admin": false, 00:05:29.941 "nvme_io": false, 00:05:29.941 "nvme_io_md": false, 00:05:29.941 "write_zeroes": true, 00:05:29.941 "zcopy": true, 00:05:29.941 "get_zone_info": false, 00:05:29.941 "zone_management": false, 00:05:29.941 "zone_append": false, 00:05:29.941 "compare": false, 00:05:29.941 "compare_and_write": false, 00:05:29.941 "abort": true, 00:05:29.941 "seek_hole": false, 00:05:29.941 "seek_data": false, 00:05:29.941 "copy": true, 00:05:29.941 "nvme_iov_md": false 00:05:29.941 }, 00:05:29.941 "memory_domains": [ 00:05:29.941 { 00:05:29.941 "dma_device_id": "system", 00:05:29.941 "dma_device_type": 1 00:05:29.941 }, 00:05:29.941 { 00:05:29.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.942 "dma_device_type": 2 00:05:29.942 } 00:05:29.942 ], 00:05:29.942 "driver_specific": {} 00:05:29.942 } 00:05:29.942 ]' 00:05:29.942 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:29.942 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:29.942 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:29.942 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.942 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.942 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.942 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:29.942 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.942 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.942 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.942 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:29.942 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:29.942 16:16:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:29.942 00:05:29.942 real 0m0.149s 00:05:29.942 user 0m0.096s 00:05:29.942 sys 0m0.020s 00:05:29.942 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.942 16:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.942 ************************************ 00:05:29.942 END TEST rpc_plugins 00:05:29.942 ************************************ 00:05:29.942 16:16:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.942 16:16:15 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:29.942 16:16:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.942 16:16:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.942 16:16:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.942 ************************************ 00:05:29.942 START TEST rpc_trace_cmd_test 00:05:29.942 ************************************ 00:05:29.942 16:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:29.942 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:29.942 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:29.942 16:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.942 16:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.942 16:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.942 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:29.942 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1497984", 00:05:29.942 "tpoint_group_mask": "0x8", 00:05:29.942 "iscsi_conn": { 00:05:29.942 "mask": "0x2", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "scsi": { 00:05:29.942 "mask": "0x4", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "bdev": { 00:05:29.942 "mask": "0x8", 00:05:29.942 "tpoint_mask": "0xffffffffffffffff" 00:05:29.942 }, 00:05:29.942 "nvmf_rdma": { 00:05:29.942 "mask": "0x10", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "nvmf_tcp": { 00:05:29.942 "mask": "0x20", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "ftl": { 00:05:29.942 "mask": "0x40", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "blobfs": { 00:05:29.942 "mask": "0x80", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "dsa": { 00:05:29.942 "mask": "0x200", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "thread": { 00:05:29.942 "mask": "0x400", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "nvme_pcie": { 00:05:29.942 "mask": "0x800", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "iaa": { 00:05:29.942 "mask": "0x1000", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "nvme_tcp": { 00:05:29.942 "mask": "0x2000", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "bdev_nvme": { 00:05:29.942 "mask": "0x4000", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 }, 00:05:29.942 "sock": { 00:05:29.942 "mask": "0x8000", 00:05:29.942 "tpoint_mask": "0x0" 00:05:29.942 } 00:05:29.942 }' 00:05:29.942 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:05:30.200 00:05:30.200 real 0m0.240s 00:05:30.200 user 0m0.200s 00:05:30.200 sys 0m0.032s 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.200 16:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.200 ************************************ 00:05:30.200 END TEST rpc_trace_cmd_test 00:05:30.200 ************************************ 00:05:30.200 16:16:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.200 16:16:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.200 16:16:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.200 16:16:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.200 16:16:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.200 16:16:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.200 16:16:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.459 ************************************ 00:05:30.459 START TEST rpc_daemon_integrity 00:05:30.459 ************************************ 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.459 { 00:05:30.459 "name": "Malloc2", 00:05:30.459 "aliases": [ 00:05:30.459 "fad68727-c2ad-44da-805b-d3d5487dce51" 00:05:30.459 ], 00:05:30.459 "product_name": "Malloc disk", 00:05:30.459 "block_size": 512, 00:05:30.459 "num_blocks": 16384, 00:05:30.459 "uuid": "fad68727-c2ad-44da-805b-d3d5487dce51", 00:05:30.459 "assigned_rate_limits": { 00:05:30.459 "rw_ios_per_sec": 0, 00:05:30.459 "rw_mbytes_per_sec": 0, 00:05:30.459 "r_mbytes_per_sec": 0, 00:05:30.459 "w_mbytes_per_sec": 0 00:05:30.459 }, 00:05:30.459 "claimed": false, 00:05:30.459 "zoned": false, 00:05:30.459 "supported_io_types": { 00:05:30.459 "read": true, 00:05:30.459 "write": true, 00:05:30.459 "unmap": true, 00:05:30.459 "flush": true, 00:05:30.459 "reset": true, 00:05:30.459 "nvme_admin": false, 
00:05:30.459 "nvme_io": false, 00:05:30.459 "nvme_io_md": false, 00:05:30.459 "write_zeroes": true, 00:05:30.459 "zcopy": true, 00:05:30.459 "get_zone_info": false, 00:05:30.459 "zone_management": false, 00:05:30.459 "zone_append": false, 00:05:30.459 "compare": false, 00:05:30.459 "compare_and_write": false, 00:05:30.459 "abort": true, 00:05:30.459 "seek_hole": false, 00:05:30.459 "seek_data": false, 00:05:30.459 "copy": true, 00:05:30.459 "nvme_iov_md": false 00:05:30.459 }, 00:05:30.459 "memory_domains": [ 00:05:30.459 { 00:05:30.459 "dma_device_id": "system", 00:05:30.459 "dma_device_type": 1 00:05:30.459 }, 00:05:30.459 { 00:05:30.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.459 "dma_device_type": 2 00:05:30.459 } 00:05:30.459 ], 00:05:30.459 "driver_specific": {} 00:05:30.459 } 00:05:30.459 ]' 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.459 [2024-07-15 16:16:15.952137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.459 [2024-07-15 16:16:15.952173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.459 [2024-07-15 16:16:15.952189] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4fee170 00:05:30.459 [2024-07-15 16:16:15.952199] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.459 [2024-07-15 16:16:15.952936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.459 [2024-07-15 16:16:15.952958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.459 Passthru0 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.459 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.459 { 00:05:30.459 "name": "Malloc2", 00:05:30.459 "aliases": [ 00:05:30.459 "fad68727-c2ad-44da-805b-d3d5487dce51" 00:05:30.459 ], 00:05:30.459 "product_name": "Malloc disk", 00:05:30.459 "block_size": 512, 00:05:30.459 "num_blocks": 16384, 00:05:30.459 "uuid": "fad68727-c2ad-44da-805b-d3d5487dce51", 00:05:30.459 "assigned_rate_limits": { 00:05:30.459 "rw_ios_per_sec": 0, 00:05:30.459 "rw_mbytes_per_sec": 0, 00:05:30.459 "r_mbytes_per_sec": 0, 00:05:30.459 "w_mbytes_per_sec": 0 00:05:30.459 }, 00:05:30.459 "claimed": true, 00:05:30.459 "claim_type": "exclusive_write", 00:05:30.459 "zoned": false, 00:05:30.459 "supported_io_types": { 00:05:30.459 "read": true, 00:05:30.459 "write": true, 00:05:30.459 "unmap": true, 00:05:30.459 "flush": true, 00:05:30.459 "reset": true, 00:05:30.459 "nvme_admin": false, 00:05:30.459 "nvme_io": false, 00:05:30.459 "nvme_io_md": false, 00:05:30.459 "write_zeroes": true, 00:05:30.459 "zcopy": true, 
00:05:30.459 "get_zone_info": false, 00:05:30.459 "zone_management": false, 00:05:30.459 "zone_append": false, 00:05:30.459 "compare": false, 00:05:30.459 "compare_and_write": false, 00:05:30.459 "abort": true, 00:05:30.459 "seek_hole": false, 00:05:30.459 "seek_data": false, 00:05:30.459 "copy": true, 00:05:30.459 "nvme_iov_md": false 00:05:30.459 }, 00:05:30.459 "memory_domains": [ 00:05:30.459 { 00:05:30.459 "dma_device_id": "system", 00:05:30.459 "dma_device_type": 1 00:05:30.459 }, 00:05:30.459 { 00:05:30.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.459 "dma_device_type": 2 00:05:30.459 } 00:05:30.459 ], 00:05:30.459 "driver_specific": {} 00:05:30.459 }, 00:05:30.459 { 00:05:30.459 "name": "Passthru0", 00:05:30.459 "aliases": [ 00:05:30.459 "7e042cb4-ee43-5461-b8f0-a770c800aff8" 00:05:30.459 ], 00:05:30.459 "product_name": "passthru", 00:05:30.459 "block_size": 512, 00:05:30.459 "num_blocks": 16384, 00:05:30.459 "uuid": "7e042cb4-ee43-5461-b8f0-a770c800aff8", 00:05:30.459 "assigned_rate_limits": { 00:05:30.459 "rw_ios_per_sec": 0, 00:05:30.459 "rw_mbytes_per_sec": 0, 00:05:30.459 "r_mbytes_per_sec": 0, 00:05:30.459 "w_mbytes_per_sec": 0 00:05:30.459 }, 00:05:30.459 "claimed": false, 00:05:30.459 "zoned": false, 00:05:30.459 "supported_io_types": { 00:05:30.460 "read": true, 00:05:30.460 "write": true, 00:05:30.460 "unmap": true, 00:05:30.460 "flush": true, 00:05:30.460 "reset": true, 00:05:30.460 "nvme_admin": false, 00:05:30.460 "nvme_io": false, 00:05:30.460 "nvme_io_md": false, 00:05:30.460 "write_zeroes": true, 00:05:30.460 "zcopy": true, 00:05:30.460 "get_zone_info": false, 00:05:30.460 "zone_management": false, 00:05:30.460 "zone_append": false, 00:05:30.460 "compare": false, 00:05:30.460 "compare_and_write": false, 00:05:30.460 "abort": true, 00:05:30.460 "seek_hole": false, 00:05:30.460 "seek_data": false, 00:05:30.460 "copy": true, 00:05:30.460 "nvme_iov_md": false 00:05:30.460 }, 00:05:30.460 "memory_domains": [ 00:05:30.460 { 00:05:30.460 "dma_device_id": "system", 00:05:30.460 "dma_device_type": 1 00:05:30.460 }, 00:05:30.460 { 00:05:30.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.460 "dma_device_type": 2 00:05:30.460 } 00:05:30.460 ], 00:05:30.460 "driver_specific": { 00:05:30.460 "passthru": { 00:05:30.460 "name": "Passthru0", 00:05:30.460 "base_bdev_name": "Malloc2" 00:05:30.460 } 00:05:30.460 } 00:05:30.460 } 00:05:30.460 ]' 00:05:30.460 16:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.460 16:16:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.718 00:05:30.718 real 0m0.294s 00:05:30.718 user 0m0.189s 00:05:30.718 sys 0m0.046s 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.718 16:16:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.718 ************************************ 00:05:30.718 END TEST rpc_daemon_integrity 00:05:30.718 ************************************ 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.718 16:16:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:30.718 16:16:16 rpc -- rpc/rpc.sh@84 -- # killprocess 1497984 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@948 -- # '[' -z 1497984 ']' 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@952 -- # kill -0 1497984 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@953 -- # uname 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1497984 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1497984' 00:05:30.718 killing process with pid 1497984 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@967 -- # kill 1497984 00:05:30.718 16:16:16 rpc -- common/autotest_common.sh@972 -- # wait 1497984 00:05:30.976 00:05:30.976 real 0m2.639s 00:05:30.976 user 0m3.362s 00:05:30.976 sys 0m0.804s 00:05:30.976 16:16:16 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.976 16:16:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.976 ************************************ 00:05:30.976 END TEST rpc 00:05:30.976 ************************************ 00:05:31.234 16:16:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.234 16:16:16 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.234 16:16:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.234 16:16:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.234 16:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 ************************************ 00:05:31.234 START TEST skip_rpc 00:05:31.234 ************************************ 00:05:31.234 16:16:16 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.234 * Looking for test storage... 
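The rpc_daemon_integrity pass above reduces to a short rpc.py sequence. A minimal sketch, assuming a running spdk_tgt and the stock scripts/rpc.py client (the bdev names and geometry are the ones the trace shows):

    # create an 8 MB malloc bdev (16384 x 512-byte blocks), then layer a passthru bdev on it
    scripts/rpc.py bdev_malloc_create -b Malloc2 8 512
    scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length    # expect 2: Malloc2 plus Passthru0
    # tear down in reverse order; the bdev list should come back empty
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc2
    scripts/rpc.py bdev_get_bdevs | jq length    # expect 0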
00:05:31.234 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:31.234 16:16:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:31.234 16:16:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:31.234 16:16:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.234 16:16:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.234 16:16:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.234 16:16:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 ************************************ 00:05:31.234 START TEST skip_rpc 00:05:31.234 ************************************ 00:05:31.234 16:16:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:31.234 16:16:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.234 16:16:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1498553 00:05:31.234 16:16:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.234 16:16:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.234 [2024-07-15 16:16:16.757142] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:31.234 [2024-07-15 16:16:16.757191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498553 ] 00:05:31.234 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.495 [2024-07-15 16:16:16.831538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.495 [2024-07-15 16:16:16.912843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.762 16:16:21 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1498553 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1498553 ']' 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1498553 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1498553 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1498553' 00:05:36.762 killing process with pid 1498553 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1498553 00:05:36.762 16:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1498553 00:05:36.762 00:05:36.762 real 0m5.404s 00:05:36.762 user 0m5.143s 00:05:36.762 sys 0m0.290s 00:05:36.762 16:16:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.762 16:16:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.762 ************************************ 00:05:36.762 END TEST skip_rpc 00:05:36.762 ************************************ 00:05:36.762 16:16:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.762 16:16:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.762 16:16:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.762 16:16:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.762 16:16:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.762 ************************************ 00:05:36.762 START TEST skip_rpc_with_json 00:05:36.762 ************************************ 00:05:36.762 16:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1499320 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1499320 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1499320 ']' 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
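The skip_rpc pass above is the inverse check: with the target started via --no-rpc-server, every RPC call must fail. A sketch under the same tree layout:

    # start the target with RPC disabled, then prove no RPC call can land
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5
    scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' && exit 1
    kill "$pid" && wait "$pid"    # clean shutdown of the target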
00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.763 16:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.763 [2024-07-15 16:16:22.241905] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:36.763 [2024-07-15 16:16:22.241971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499320 ] 00:05:36.763 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.763 [2024-07-15 16:16:22.317121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.021 [2024-07-15 16:16:22.412814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.588 [2024-07-15 16:16:23.078337] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.588 request: 00:05:37.588 { 00:05:37.588 "trtype": "tcp", 00:05:37.588 "method": "nvmf_get_transports", 00:05:37.588 "req_id": 1 00:05:37.588 } 00:05:37.588 Got JSON-RPC error response 00:05:37.588 response: 00:05:37.588 { 00:05:37.588 "code": -19, 00:05:37.588 "message": "No such device" 00:05:37.588 } 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.588 [2024-07-15 16:16:23.090430] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.588 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.846 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.846 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:37.846 { 00:05:37.846 "subsystems": [ 00:05:37.846 { 00:05:37.846 "subsystem": "scheduler", 00:05:37.846 "config": [ 00:05:37.846 { 00:05:37.846 "method": "framework_set_scheduler", 00:05:37.846 "params": { 00:05:37.846 "name": "static" 00:05:37.846 } 00:05:37.846 } 00:05:37.846 ] 00:05:37.846 }, 00:05:37.846 { 00:05:37.846 "subsystem": "vmd", 00:05:37.846 "config": [] 00:05:37.846 }, 00:05:37.846 { 00:05:37.846 "subsystem": "sock", 00:05:37.846 "config": [ 00:05:37.846 { 00:05:37.846 "method": "sock_set_default_impl", 00:05:37.846 
"params": { 00:05:37.846 "impl_name": "posix" 00:05:37.846 } 00:05:37.846 }, 00:05:37.846 { 00:05:37.846 "method": "sock_impl_set_options", 00:05:37.846 "params": { 00:05:37.846 "impl_name": "ssl", 00:05:37.846 "recv_buf_size": 4096, 00:05:37.846 "send_buf_size": 4096, 00:05:37.846 "enable_recv_pipe": true, 00:05:37.846 "enable_quickack": false, 00:05:37.846 "enable_placement_id": 0, 00:05:37.846 "enable_zerocopy_send_server": true, 00:05:37.846 "enable_zerocopy_send_client": false, 00:05:37.846 "zerocopy_threshold": 0, 00:05:37.846 "tls_version": 0, 00:05:37.846 "enable_ktls": false 00:05:37.846 } 00:05:37.846 }, 00:05:37.846 { 00:05:37.846 "method": "sock_impl_set_options", 00:05:37.846 "params": { 00:05:37.846 "impl_name": "posix", 00:05:37.846 "recv_buf_size": 2097152, 00:05:37.846 "send_buf_size": 2097152, 00:05:37.846 "enable_recv_pipe": true, 00:05:37.846 "enable_quickack": false, 00:05:37.846 "enable_placement_id": 0, 00:05:37.846 "enable_zerocopy_send_server": true, 00:05:37.846 "enable_zerocopy_send_client": false, 00:05:37.846 "zerocopy_threshold": 0, 00:05:37.846 "tls_version": 0, 00:05:37.846 "enable_ktls": false 00:05:37.846 } 00:05:37.846 } 00:05:37.846 ] 00:05:37.846 }, 00:05:37.846 { 00:05:37.846 "subsystem": "iobuf", 00:05:37.846 "config": [ 00:05:37.846 { 00:05:37.846 "method": "iobuf_set_options", 00:05:37.846 "params": { 00:05:37.846 "small_pool_count": 8192, 00:05:37.846 "large_pool_count": 1024, 00:05:37.846 "small_bufsize": 8192, 00:05:37.846 "large_bufsize": 135168 00:05:37.846 } 00:05:37.846 } 00:05:37.846 ] 00:05:37.846 }, 00:05:37.846 { 00:05:37.846 "subsystem": "keyring", 00:05:37.846 "config": [] 00:05:37.846 }, 00:05:37.846 { 00:05:37.846 "subsystem": "vfio_user_target", 00:05:37.846 "config": null 00:05:37.846 }, 00:05:37.846 { 00:05:37.846 "subsystem": "accel", 00:05:37.846 "config": [ 00:05:37.846 { 00:05:37.846 "method": "accel_set_options", 00:05:37.846 "params": { 00:05:37.846 "small_cache_size": 128, 00:05:37.846 "large_cache_size": 16, 00:05:37.846 "task_count": 2048, 00:05:37.846 "sequence_count": 2048, 00:05:37.846 "buf_count": 2048 00:05:37.846 } 00:05:37.846 } 00:05:37.846 ] 00:05:37.846 }, 00:05:37.846 { 00:05:37.847 "subsystem": "bdev", 00:05:37.847 "config": [ 00:05:37.847 { 00:05:37.847 "method": "bdev_set_options", 00:05:37.847 "params": { 00:05:37.847 "bdev_io_pool_size": 65535, 00:05:37.847 "bdev_io_cache_size": 256, 00:05:37.847 "bdev_auto_examine": true, 00:05:37.847 "iobuf_small_cache_size": 128, 00:05:37.847 "iobuf_large_cache_size": 16 00:05:37.847 } 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "method": "bdev_raid_set_options", 00:05:37.847 "params": { 00:05:37.847 "process_window_size_kb": 1024 00:05:37.847 } 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "method": "bdev_nvme_set_options", 00:05:37.847 "params": { 00:05:37.847 "action_on_timeout": "none", 00:05:37.847 "timeout_us": 0, 00:05:37.847 "timeout_admin_us": 0, 00:05:37.847 "keep_alive_timeout_ms": 10000, 00:05:37.847 "arbitration_burst": 0, 00:05:37.847 "low_priority_weight": 0, 00:05:37.847 "medium_priority_weight": 0, 00:05:37.847 "high_priority_weight": 0, 00:05:37.847 "nvme_adminq_poll_period_us": 10000, 00:05:37.847 "nvme_ioq_poll_period_us": 0, 00:05:37.847 "io_queue_requests": 0, 00:05:37.847 "delay_cmd_submit": true, 00:05:37.847 "transport_retry_count": 4, 00:05:37.847 "bdev_retry_count": 3, 00:05:37.847 "transport_ack_timeout": 0, 00:05:37.847 "ctrlr_loss_timeout_sec": 0, 00:05:37.847 "reconnect_delay_sec": 0, 00:05:37.847 "fast_io_fail_timeout_sec": 0, 00:05:37.847 
"disable_auto_failback": false, 00:05:37.847 "generate_uuids": false, 00:05:37.847 "transport_tos": 0, 00:05:37.847 "nvme_error_stat": false, 00:05:37.847 "rdma_srq_size": 0, 00:05:37.847 "io_path_stat": false, 00:05:37.847 "allow_accel_sequence": false, 00:05:37.847 "rdma_max_cq_size": 0, 00:05:37.847 "rdma_cm_event_timeout_ms": 0, 00:05:37.847 "dhchap_digests": [ 00:05:37.847 "sha256", 00:05:37.847 "sha384", 00:05:37.847 "sha512" 00:05:37.847 ], 00:05:37.847 "dhchap_dhgroups": [ 00:05:37.847 "null", 00:05:37.847 "ffdhe2048", 00:05:37.847 "ffdhe3072", 00:05:37.847 "ffdhe4096", 00:05:37.847 "ffdhe6144", 00:05:37.847 "ffdhe8192" 00:05:37.847 ] 00:05:37.847 } 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "method": "bdev_nvme_set_hotplug", 00:05:37.847 "params": { 00:05:37.847 "period_us": 100000, 00:05:37.847 "enable": false 00:05:37.847 } 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "method": "bdev_iscsi_set_options", 00:05:37.847 "params": { 00:05:37.847 "timeout_sec": 30 00:05:37.847 } 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "method": "bdev_wait_for_examine" 00:05:37.847 } 00:05:37.847 ] 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "subsystem": "nvmf", 00:05:37.847 "config": [ 00:05:37.847 { 00:05:37.847 "method": "nvmf_set_config", 00:05:37.847 "params": { 00:05:37.847 "discovery_filter": "match_any", 00:05:37.847 "admin_cmd_passthru": { 00:05:37.847 "identify_ctrlr": false 00:05:37.847 } 00:05:37.847 } 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "method": "nvmf_set_max_subsystems", 00:05:37.847 "params": { 00:05:37.847 "max_subsystems": 1024 00:05:37.847 } 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "method": "nvmf_set_crdt", 00:05:37.847 "params": { 00:05:37.847 "crdt1": 0, 00:05:37.847 "crdt2": 0, 00:05:37.847 "crdt3": 0 00:05:37.847 } 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "method": "nvmf_create_transport", 00:05:37.847 "params": { 00:05:37.847 "trtype": "TCP", 00:05:37.847 "max_queue_depth": 128, 00:05:37.847 "max_io_qpairs_per_ctrlr": 127, 00:05:37.847 "in_capsule_data_size": 4096, 00:05:37.847 "max_io_size": 131072, 00:05:37.847 "io_unit_size": 131072, 00:05:37.847 "max_aq_depth": 128, 00:05:37.847 "num_shared_buffers": 511, 00:05:37.847 "buf_cache_size": 4294967295, 00:05:37.847 "dif_insert_or_strip": false, 00:05:37.847 "zcopy": false, 00:05:37.847 "c2h_success": true, 00:05:37.847 "sock_priority": 0, 00:05:37.847 "abort_timeout_sec": 1, 00:05:37.847 "ack_timeout": 0, 00:05:37.847 "data_wr_pool_size": 0 00:05:37.847 } 00:05:37.847 } 00:05:37.847 ] 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "subsystem": "nbd", 00:05:37.847 "config": [] 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "subsystem": "ublk", 00:05:37.847 "config": [] 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "subsystem": "vhost_blk", 00:05:37.847 "config": [] 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "subsystem": "scsi", 00:05:37.847 "config": null 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "subsystem": "iscsi", 00:05:37.847 "config": [ 00:05:37.847 { 00:05:37.847 "method": "iscsi_set_options", 00:05:37.847 "params": { 00:05:37.847 "node_base": "iqn.2016-06.io.spdk", 00:05:37.847 "max_sessions": 128, 00:05:37.847 "max_connections_per_session": 2, 00:05:37.847 "max_queue_depth": 64, 00:05:37.847 "default_time2wait": 2, 00:05:37.847 "default_time2retain": 20, 00:05:37.847 "first_burst_length": 8192, 00:05:37.847 "immediate_data": true, 00:05:37.847 "allow_duplicated_isid": false, 00:05:37.847 "error_recovery_level": 0, 00:05:37.847 "nop_timeout": 60, 00:05:37.847 "nop_in_interval": 30, 00:05:37.847 
"disable_chap": false, 00:05:37.847 "require_chap": false, 00:05:37.847 "mutual_chap": false, 00:05:37.847 "chap_group": 0, 00:05:37.847 "max_large_datain_per_connection": 64, 00:05:37.847 "max_r2t_per_connection": 4, 00:05:37.847 "pdu_pool_size": 36864, 00:05:37.847 "immediate_data_pool_size": 16384, 00:05:37.847 "data_out_pool_size": 2048 00:05:37.847 } 00:05:37.847 } 00:05:37.847 ] 00:05:37.847 }, 00:05:37.847 { 00:05:37.847 "subsystem": "vhost_scsi", 00:05:37.847 "config": [] 00:05:37.847 } 00:05:37.847 ] 00:05:37.847 } 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1499320 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1499320 ']' 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1499320 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1499320 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1499320' 00:05:37.847 killing process with pid 1499320 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1499320 00:05:37.847 16:16:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1499320 00:05:38.104 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1499556 00:05:38.104 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.104 16:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:43.368 16:16:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1499556 00:05:43.368 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1499556 ']' 00:05:43.368 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1499556 00:05:43.369 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:43.369 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.369 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1499556 00:05:43.369 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.369 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.369 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1499556' 00:05:43.369 killing process with pid 1499556 00:05:43.369 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1499556 00:05:43.369 16:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1499556 00:05:43.628 
16:16:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:43.628 00:05:43.628 real 0m6.836s 00:05:43.628 user 0m6.607s 00:05:43.628 sys 0m0.669s 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.628 ************************************ 00:05:43.628 END TEST skip_rpc_with_json 00:05:43.628 ************************************ 00:05:43.628 16:16:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.628 16:16:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:43.628 16:16:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.628 16:16:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.628 16:16:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.628 ************************************ 00:05:43.628 START TEST skip_rpc_with_delay 00:05:43.628 ************************************ 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.628 [2024-07-15 16:16:29.145714] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
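The skip_rpc_with_json pass that just closed is a save_config/--json round trip; condensed to its RPC skeleton (same calls as in the trace; paths are shortened and the output redirection is an assumption, the harness uses its own LOG_PATH):

    # bring up a target, create the TCP transport, snapshot the live config
    build/bin/spdk_tgt -m 0x1 &
    scripts/rpc.py nvmf_get_transports --trtype tcp    # expected to fail first: no transport yet
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > config.json
    kill %1
    # replay the snapshot with no RPC server and check the transport is recreated
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt    # present only if the saved config was honored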
00:05:43.628 [2024-07-15 16:16:29.145825] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.628 00:05:43.628 real 0m0.031s 00:05:43.628 user 0m0.014s 00:05:43.628 sys 0m0.017s 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.628 16:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:43.628 ************************************ 00:05:43.628 END TEST skip_rpc_with_delay 00:05:43.628 ************************************ 00:05:43.628 16:16:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.628 16:16:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.628 16:16:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.628 16:16:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.628 16:16:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.628 16:16:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.628 16:16:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.887 ************************************ 00:05:43.887 START TEST exit_on_failed_rpc_init 00:05:43.887 ************************************ 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1500339 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1500339 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1500339 ']' 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.887 16:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.887 [2024-07-15 16:16:29.267537] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
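skip_rpc_with_delay above is a pure negative test; the single invocation it asserts on, with the error taken verbatim from the trace:

    # must refuse to start: --wait-for-rpc needs an RPC server, --no-rpc-server forbids one
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.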
00:05:43.887 [2024-07-15 16:16:29.267621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500339 ] 00:05:43.887 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.887 [2024-07-15 16:16:29.340479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.887 [2024-07-15 16:16:29.431509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.824 [2024-07-15 16:16:30.121167] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
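The NOT-wrapped spdk_tgt -m 0x2 being launched here is the expected-failure half of exit_on_failed_rpc_init: a second target aimed at the same default RPC socket. In sketch form (the -r variant on the last line is an illustration only, not something this test runs):

    build/bin/spdk_tgt -m 0x1 &                       # first instance owns /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2                         # fails: RPC Unix domain socket path in use
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock  # a distinct socket path would avoid the clash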
00:05:44.824 [2024-07-15 16:16:30.121264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500361 ] 00:05:44.824 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.824 [2024-07-15 16:16:30.196187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.824 [2024-07-15 16:16:30.280031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.824 [2024-07-15 16:16:30.280118] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:44.824 [2024-07-15 16:16:30.280130] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:44.824 [2024-07-15 16:16:30.280139] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1500339 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1500339 ']' 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1500339 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.824 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1500339 00:05:45.083 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.083 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.083 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1500339' 00:05:45.083 killing process with pid 1500339 00:05:45.083 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1500339 00:05:45.083 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1500339 00:05:45.342 00:05:45.342 real 0m1.507s 00:05:45.342 user 0m1.686s 00:05:45.342 sys 0m0.458s 00:05:45.342 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.342 16:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.342 ************************************ 00:05:45.342 END TEST exit_on_failed_rpc_init 00:05:45.342 ************************************ 00:05:45.342 16:16:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.342 16:16:30 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:45.342 00:05:45.342 real 0m14.178s 00:05:45.342 user 0m13.595s 00:05:45.342 sys 0m1.715s 00:05:45.342 16:16:30 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.342 16:16:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.342 ************************************ 00:05:45.342 END TEST skip_rpc 00:05:45.342 ************************************ 00:05:45.342 16:16:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.342 16:16:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.342 16:16:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.342 16:16:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.342 16:16:30 -- common/autotest_common.sh@10 -- # set +x 00:05:45.342 ************************************ 00:05:45.342 START TEST rpc_client 00:05:45.342 ************************************ 00:05:45.342 16:16:30 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.600 * Looking for test storage... 00:05:45.600 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:45.600 16:16:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:45.600 OK 00:05:45.600 16:16:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.600 00:05:45.600 real 0m0.119s 00:05:45.600 user 0m0.046s 00:05:45.600 sys 0m0.082s 00:05:45.600 16:16:30 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.600 16:16:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:45.600 ************************************ 00:05:45.600 END TEST rpc_client 00:05:45.600 ************************************ 00:05:45.600 16:16:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.600 16:16:31 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.600 16:16:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.600 16:16:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.600 16:16:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.600 ************************************ 00:05:45.600 START TEST json_config 00:05:45.600 ************************************ 00:05:45.600 16:16:31 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.601 16:16:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:45.601 16:16:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:45.601 16:16:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.601 16:16:31 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.601 16:16:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.601 16:16:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.601 16:16:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.601 16:16:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.601 16:16:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:45.601 16:16:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@47 -- # : 0 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
00:05:45.601 16:16:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.601 16:16:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.601 16:16:31 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:45.601 16:16:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:45.601 16:16:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:45.601 16:16:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:45.601 16:16:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.601 16:16:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:45.601 WARNING: No tests are enabled so not running JSON configuration tests 00:05:45.601 16:16:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:45.601 00:05:45.601 real 0m0.080s 00:05:45.601 user 0m0.039s 00:05:45.601 sys 0m0.042s 00:05:45.601 16:16:31 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.601 16:16:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.601 ************************************ 00:05:45.601 END TEST json_config 00:05:45.601 ************************************ 00:05:45.858 16:16:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.858 16:16:31 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:45.858 16:16:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.858 16:16:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.858 16:16:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.858 ************************************ 00:05:45.858 START TEST json_config_extra_key 00:05:45.858 ************************************ 00:05:45.858 16:16:31 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:45.858 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.858 16:16:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:45.858 16:16:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.859 16:16:31 json_config_extra_key -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:45.859 16:16:31 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.859 16:16:31 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.859 16:16:31 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.859 16:16:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.859 16:16:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.859 16:16:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.859 16:16:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:45.859 16:16:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.859 
16:16:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.859 16:16:31 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:45.859 INFO: launching applications... 00:05:45.859 16:16:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1500677 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.859 Waiting for target to run... 
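The target being waited on was launched with a pre-baked JSON config on a non-default RPC socket (-r). A sketch of driving such a target, assuming rpc.py's -s option for pointing the client at that socket:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json &
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version    # client must name the same socket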
00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1500677 /var/tmp/spdk_tgt.sock 00:05:45.859 16:16:31 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1500677 ']' 00:05:45.859 16:16:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:45.859 16:16:31 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.859 16:16:31 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.859 16:16:31 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.859 16:16:31 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.859 16:16:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.859 [2024-07-15 16:16:31.362075] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:45.859 [2024-07-15 16:16:31.362171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500677 ] 00:05:45.859 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.424 [2024-07-15 16:16:31.837397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.424 [2024-07-15 16:16:31.924978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.684 16:16:32 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.684 16:16:32 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:46.684 00:05:46.684 16:16:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:46.684 INFO: shutting down applications... 
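The shutdown sequence that follows is a SIGINT plus a bounded liveness poll; the loop common.sh traces below amounts to:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do            # up to ~15 s of grace, 0.5 s per probe
        kill -0 "$pid" 2>/dev/null || break     # probe fails once the process is gone
        sleep 0.5
    done
    echo 'SPDK target shutdown done'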
00:05:46.684 16:16:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1500677 ]] 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1500677 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1500677 00:05:46.684 16:16:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.251 16:16:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.251 16:16:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.251 16:16:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1500677 00:05:47.251 16:16:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:47.251 16:16:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:47.251 16:16:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:47.251 16:16:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:47.251 SPDK target shutdown done 00:05:47.251 16:16:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:47.251 Success 00:05:47.251 00:05:47.251 real 0m1.465s 00:05:47.251 user 0m1.064s 00:05:47.251 sys 0m0.578s 00:05:47.251 16:16:32 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.251 16:16:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.251 ************************************ 00:05:47.251 END TEST json_config_extra_key 00:05:47.251 ************************************ 00:05:47.251 16:16:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.251 16:16:32 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.251 16:16:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.251 16:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.251 16:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.251 ************************************ 00:05:47.251 START TEST alias_rpc 00:05:47.251 ************************************ 00:05:47.251 16:16:32 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.510 * Looking for test storage... 
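[Editor's note] The shutdown sequence traced above is SIGINT followed by a poll: kill -0 (which delivers no signal, it only tests that the PID still exists) every 0.5 s for at most 30 iterations, so the target gets roughly 15 s to exit cleanly before the harness would treat it as hung. As a standalone sketch:

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"                  # ask the target to exit cleanly
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || {
                echo 'SPDK target shutdown done'
                return 0
            }
            sleep 0.5
        done
        return 1                             # still alive after ~15 s
    }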
00:05:47.510 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:47.510 16:16:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.510 16:16:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1500964 00:05:47.510 16:16:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1500964 00:05:47.510 16:16:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.510 16:16:32 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1500964 ']' 00:05:47.510 16:16:32 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.510 16:16:32 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.510 16:16:32 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.510 16:16:32 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.510 16:16:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.510 [2024-07-15 16:16:32.882074] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:47.510 [2024-07-15 16:16:32.882160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500964 ] 00:05:47.510 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.510 [2024-07-15 16:16:32.959356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.510 [2024-07-15 16:16:33.044233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:48.452 16:16:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:48.452 16:16:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1500964 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1500964 ']' 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1500964 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1500964 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1500964' 00:05:48.452 killing process with pid 1500964 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@967 -- # kill 1500964 00:05:48.452 16:16:33 alias_rpc -- common/autotest_common.sh@972 -- # wait 1500964 00:05:49.020 00:05:49.020 real 0m1.539s 00:05:49.020 user 0m1.640s 00:05:49.020 sys 0m0.456s 00:05:49.020 16:16:34 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.020 16:16:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 
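[Editor's note] killprocess above re-reads the process's comm name (ps --no-headers -o comm=) before signalling, which guards against the PID having been recycled by an unrelated process, and branches on whether that name is sudo, as in the reactor_0-vs-sudo test in the trace. A reduced sketch of the guard (the real helper's sudo branch is more involved):

    killprocess() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
        echo "killing process with pid $pid"
        if [[ $name == sudo ]]; then
            sudo kill "$pid"    # assumption: privileged kill for sudo-wrapped apps
        else
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true
    }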
00:05:49.020 ************************************ 00:05:49.020 END TEST alias_rpc 00:05:49.020 ************************************ 00:05:49.020 16:16:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.020 16:16:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:49.020 16:16:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.020 16:16:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.020 16:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.020 16:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:49.020 ************************************ 00:05:49.020 START TEST spdkcli_tcp 00:05:49.020 ************************************ 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.020 * Looking for test storage... 00:05:49.020 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1501299 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1501299 00:05:49.020 16:16:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1501299 ']' 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.020 16:16:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.020 [2024-07-15 16:16:34.500695] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:05:49.020 [2024-07-15 16:16:34.500766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501299 ] 00:05:49.020 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.020 [2024-07-15 16:16:34.574960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.278 [2024-07-15 16:16:34.657692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.279 [2024-07-15 16:16:34.657694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.845 16:16:35 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.845 16:16:35 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:49.845 16:16:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1501316 00:05:49.845 16:16:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:49.845 16:16:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.105 [ 00:05:50.105 "spdk_get_version", 00:05:50.105 "rpc_get_methods", 00:05:50.105 "trace_get_info", 00:05:50.105 "trace_get_tpoint_group_mask", 00:05:50.105 "trace_disable_tpoint_group", 00:05:50.105 "trace_enable_tpoint_group", 00:05:50.105 "trace_clear_tpoint_mask", 00:05:50.105 "trace_set_tpoint_mask", 00:05:50.105 "vfu_tgt_set_base_path", 00:05:50.105 "framework_get_pci_devices", 00:05:50.105 "framework_get_config", 00:05:50.105 "framework_get_subsystems", 00:05:50.105 "keyring_get_keys", 00:05:50.105 "iobuf_get_stats", 00:05:50.105 "iobuf_set_options", 00:05:50.105 "sock_get_default_impl", 00:05:50.105 "sock_set_default_impl", 00:05:50.105 "sock_impl_set_options", 00:05:50.105 "sock_impl_get_options", 00:05:50.105 "vmd_rescan", 00:05:50.105 "vmd_remove_device", 00:05:50.105 "vmd_enable", 00:05:50.105 "accel_get_stats", 00:05:50.105 "accel_set_options", 00:05:50.105 "accel_set_driver", 00:05:50.105 "accel_crypto_key_destroy", 00:05:50.105 "accel_crypto_keys_get", 00:05:50.105 "accel_crypto_key_create", 00:05:50.105 "accel_assign_opc", 00:05:50.105 "accel_get_module_info", 00:05:50.105 "accel_get_opc_assignments", 00:05:50.105 "notify_get_notifications", 00:05:50.105 "notify_get_types", 00:05:50.105 "bdev_get_histogram", 00:05:50.105 "bdev_enable_histogram", 00:05:50.105 "bdev_set_qos_limit", 00:05:50.105 "bdev_set_qd_sampling_period", 00:05:50.105 "bdev_get_bdevs", 00:05:50.105 "bdev_reset_iostat", 00:05:50.105 "bdev_get_iostat", 00:05:50.105 "bdev_examine", 00:05:50.105 "bdev_wait_for_examine", 00:05:50.105 "bdev_set_options", 00:05:50.105 "scsi_get_devices", 00:05:50.105 "thread_set_cpumask", 00:05:50.105 "framework_get_governor", 00:05:50.105 "framework_get_scheduler", 00:05:50.105 "framework_set_scheduler", 00:05:50.105 "framework_get_reactors", 00:05:50.105 "thread_get_io_channels", 00:05:50.105 "thread_get_pollers", 00:05:50.105 "thread_get_stats", 00:05:50.105 "framework_monitor_context_switch", 00:05:50.105 "spdk_kill_instance", 00:05:50.105 "log_enable_timestamps", 00:05:50.105 "log_get_flags", 00:05:50.105 "log_clear_flag", 00:05:50.105 "log_set_flag", 00:05:50.105 "log_get_level", 00:05:50.105 "log_set_level", 00:05:50.105 "log_get_print_level", 00:05:50.105 "log_set_print_level", 00:05:50.105 "framework_enable_cpumask_locks", 00:05:50.105 "framework_disable_cpumask_locks", 
00:05:50.105 "framework_wait_init", 00:05:50.105 "framework_start_init", 00:05:50.105 "virtio_blk_create_transport", 00:05:50.105 "virtio_blk_get_transports", 00:05:50.105 "vhost_controller_set_coalescing", 00:05:50.105 "vhost_get_controllers", 00:05:50.105 "vhost_delete_controller", 00:05:50.105 "vhost_create_blk_controller", 00:05:50.105 "vhost_scsi_controller_remove_target", 00:05:50.105 "vhost_scsi_controller_add_target", 00:05:50.105 "vhost_start_scsi_controller", 00:05:50.105 "vhost_create_scsi_controller", 00:05:50.105 "ublk_recover_disk", 00:05:50.105 "ublk_get_disks", 00:05:50.105 "ublk_stop_disk", 00:05:50.105 "ublk_start_disk", 00:05:50.105 "ublk_destroy_target", 00:05:50.105 "ublk_create_target", 00:05:50.105 "nbd_get_disks", 00:05:50.105 "nbd_stop_disk", 00:05:50.105 "nbd_start_disk", 00:05:50.105 "env_dpdk_get_mem_stats", 00:05:50.105 "nvmf_stop_mdns_prr", 00:05:50.105 "nvmf_publish_mdns_prr", 00:05:50.105 "nvmf_subsystem_get_listeners", 00:05:50.105 "nvmf_subsystem_get_qpairs", 00:05:50.105 "nvmf_subsystem_get_controllers", 00:05:50.105 "nvmf_get_stats", 00:05:50.105 "nvmf_get_transports", 00:05:50.105 "nvmf_create_transport", 00:05:50.105 "nvmf_get_targets", 00:05:50.105 "nvmf_delete_target", 00:05:50.105 "nvmf_create_target", 00:05:50.105 "nvmf_subsystem_allow_any_host", 00:05:50.105 "nvmf_subsystem_remove_host", 00:05:50.105 "nvmf_subsystem_add_host", 00:05:50.105 "nvmf_ns_remove_host", 00:05:50.105 "nvmf_ns_add_host", 00:05:50.105 "nvmf_subsystem_remove_ns", 00:05:50.105 "nvmf_subsystem_add_ns", 00:05:50.105 "nvmf_subsystem_listener_set_ana_state", 00:05:50.105 "nvmf_discovery_get_referrals", 00:05:50.105 "nvmf_discovery_remove_referral", 00:05:50.105 "nvmf_discovery_add_referral", 00:05:50.105 "nvmf_subsystem_remove_listener", 00:05:50.105 "nvmf_subsystem_add_listener", 00:05:50.105 "nvmf_delete_subsystem", 00:05:50.105 "nvmf_create_subsystem", 00:05:50.105 "nvmf_get_subsystems", 00:05:50.105 "nvmf_set_crdt", 00:05:50.105 "nvmf_set_config", 00:05:50.105 "nvmf_set_max_subsystems", 00:05:50.105 "iscsi_get_histogram", 00:05:50.105 "iscsi_enable_histogram", 00:05:50.105 "iscsi_set_options", 00:05:50.105 "iscsi_get_auth_groups", 00:05:50.105 "iscsi_auth_group_remove_secret", 00:05:50.105 "iscsi_auth_group_add_secret", 00:05:50.105 "iscsi_delete_auth_group", 00:05:50.105 "iscsi_create_auth_group", 00:05:50.105 "iscsi_set_discovery_auth", 00:05:50.105 "iscsi_get_options", 00:05:50.105 "iscsi_target_node_request_logout", 00:05:50.105 "iscsi_target_node_set_redirect", 00:05:50.105 "iscsi_target_node_set_auth", 00:05:50.105 "iscsi_target_node_add_lun", 00:05:50.105 "iscsi_get_stats", 00:05:50.105 "iscsi_get_connections", 00:05:50.105 "iscsi_portal_group_set_auth", 00:05:50.105 "iscsi_start_portal_group", 00:05:50.105 "iscsi_delete_portal_group", 00:05:50.105 "iscsi_create_portal_group", 00:05:50.105 "iscsi_get_portal_groups", 00:05:50.105 "iscsi_delete_target_node", 00:05:50.105 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.105 "iscsi_target_node_add_pg_ig_maps", 00:05:50.105 "iscsi_create_target_node", 00:05:50.105 "iscsi_get_target_nodes", 00:05:50.105 "iscsi_delete_initiator_group", 00:05:50.105 "iscsi_initiator_group_remove_initiators", 00:05:50.105 "iscsi_initiator_group_add_initiators", 00:05:50.105 "iscsi_create_initiator_group", 00:05:50.105 "iscsi_get_initiator_groups", 00:05:50.105 "keyring_linux_set_options", 00:05:50.105 "keyring_file_remove_key", 00:05:50.105 "keyring_file_add_key", 00:05:50.105 "vfu_virtio_create_scsi_endpoint", 00:05:50.105 
"vfu_virtio_scsi_remove_target", 00:05:50.105 "vfu_virtio_scsi_add_target", 00:05:50.105 "vfu_virtio_create_blk_endpoint", 00:05:50.105 "vfu_virtio_delete_endpoint", 00:05:50.105 "iaa_scan_accel_module", 00:05:50.105 "dsa_scan_accel_module", 00:05:50.105 "ioat_scan_accel_module", 00:05:50.105 "accel_error_inject_error", 00:05:50.105 "bdev_iscsi_delete", 00:05:50.105 "bdev_iscsi_create", 00:05:50.105 "bdev_iscsi_set_options", 00:05:50.105 "bdev_virtio_attach_controller", 00:05:50.105 "bdev_virtio_scsi_get_devices", 00:05:50.105 "bdev_virtio_detach_controller", 00:05:50.105 "bdev_virtio_blk_set_hotplug", 00:05:50.105 "bdev_ftl_set_property", 00:05:50.105 "bdev_ftl_get_properties", 00:05:50.105 "bdev_ftl_get_stats", 00:05:50.105 "bdev_ftl_unmap", 00:05:50.105 "bdev_ftl_unload", 00:05:50.105 "bdev_ftl_delete", 00:05:50.105 "bdev_ftl_load", 00:05:50.105 "bdev_ftl_create", 00:05:50.105 "bdev_aio_delete", 00:05:50.105 "bdev_aio_rescan", 00:05:50.105 "bdev_aio_create", 00:05:50.105 "blobfs_create", 00:05:50.105 "blobfs_detect", 00:05:50.105 "blobfs_set_cache_size", 00:05:50.105 "bdev_zone_block_delete", 00:05:50.105 "bdev_zone_block_create", 00:05:50.105 "bdev_delay_delete", 00:05:50.105 "bdev_delay_create", 00:05:50.105 "bdev_delay_update_latency", 00:05:50.105 "bdev_split_delete", 00:05:50.105 "bdev_split_create", 00:05:50.105 "bdev_error_inject_error", 00:05:50.105 "bdev_error_delete", 00:05:50.105 "bdev_error_create", 00:05:50.105 "bdev_raid_set_options", 00:05:50.105 "bdev_raid_remove_base_bdev", 00:05:50.105 "bdev_raid_add_base_bdev", 00:05:50.105 "bdev_raid_delete", 00:05:50.105 "bdev_raid_create", 00:05:50.105 "bdev_raid_get_bdevs", 00:05:50.105 "bdev_lvol_set_parent_bdev", 00:05:50.105 "bdev_lvol_set_parent", 00:05:50.105 "bdev_lvol_check_shallow_copy", 00:05:50.105 "bdev_lvol_start_shallow_copy", 00:05:50.105 "bdev_lvol_grow_lvstore", 00:05:50.105 "bdev_lvol_get_lvols", 00:05:50.105 "bdev_lvol_get_lvstores", 00:05:50.105 "bdev_lvol_delete", 00:05:50.105 "bdev_lvol_set_read_only", 00:05:50.105 "bdev_lvol_resize", 00:05:50.105 "bdev_lvol_decouple_parent", 00:05:50.106 "bdev_lvol_inflate", 00:05:50.106 "bdev_lvol_rename", 00:05:50.106 "bdev_lvol_clone_bdev", 00:05:50.106 "bdev_lvol_clone", 00:05:50.106 "bdev_lvol_snapshot", 00:05:50.106 "bdev_lvol_create", 00:05:50.106 "bdev_lvol_delete_lvstore", 00:05:50.106 "bdev_lvol_rename_lvstore", 00:05:50.106 "bdev_lvol_create_lvstore", 00:05:50.106 "bdev_passthru_delete", 00:05:50.106 "bdev_passthru_create", 00:05:50.106 "bdev_nvme_cuse_unregister", 00:05:50.106 "bdev_nvme_cuse_register", 00:05:50.106 "bdev_opal_new_user", 00:05:50.106 "bdev_opal_set_lock_state", 00:05:50.106 "bdev_opal_delete", 00:05:50.106 "bdev_opal_get_info", 00:05:50.106 "bdev_opal_create", 00:05:50.106 "bdev_nvme_opal_revert", 00:05:50.106 "bdev_nvme_opal_init", 00:05:50.106 "bdev_nvme_send_cmd", 00:05:50.106 "bdev_nvme_get_path_iostat", 00:05:50.106 "bdev_nvme_get_mdns_discovery_info", 00:05:50.106 "bdev_nvme_stop_mdns_discovery", 00:05:50.106 "bdev_nvme_start_mdns_discovery", 00:05:50.106 "bdev_nvme_set_multipath_policy", 00:05:50.106 "bdev_nvme_set_preferred_path", 00:05:50.106 "bdev_nvme_get_io_paths", 00:05:50.106 "bdev_nvme_remove_error_injection", 00:05:50.106 "bdev_nvme_add_error_injection", 00:05:50.106 "bdev_nvme_get_discovery_info", 00:05:50.106 "bdev_nvme_stop_discovery", 00:05:50.106 "bdev_nvme_start_discovery", 00:05:50.106 "bdev_nvme_get_controller_health_info", 00:05:50.106 "bdev_nvme_disable_controller", 00:05:50.106 "bdev_nvme_enable_controller", 00:05:50.106 
"bdev_nvme_reset_controller", 00:05:50.106 "bdev_nvme_get_transport_statistics", 00:05:50.106 "bdev_nvme_apply_firmware", 00:05:50.106 "bdev_nvme_detach_controller", 00:05:50.106 "bdev_nvme_get_controllers", 00:05:50.106 "bdev_nvme_attach_controller", 00:05:50.106 "bdev_nvme_set_hotplug", 00:05:50.106 "bdev_nvme_set_options", 00:05:50.106 "bdev_null_resize", 00:05:50.106 "bdev_null_delete", 00:05:50.106 "bdev_null_create", 00:05:50.106 "bdev_malloc_delete", 00:05:50.106 "bdev_malloc_create" 00:05:50.106 ] 00:05:50.106 16:16:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.106 16:16:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.106 16:16:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1501299 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1501299 ']' 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1501299 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1501299 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1501299' 00:05:50.106 killing process with pid 1501299 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1501299 00:05:50.106 16:16:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1501299 00:05:50.365 00:05:50.365 real 0m1.558s 00:05:50.365 user 0m2.863s 00:05:50.365 sys 0m0.491s 00:05:50.365 16:16:35 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.365 16:16:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.365 ************************************ 00:05:50.365 END TEST spdkcli_tcp 00:05:50.365 ************************************ 00:05:50.624 16:16:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.624 16:16:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.624 16:16:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.624 16:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.624 16:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:50.624 ************************************ 00:05:50.624 START TEST dpdk_mem_utility 00:05:50.624 ************************************ 00:05:50.624 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.624 * Looking for test storage... 
00:05:50.624 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:50.624 16:16:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:50.624 16:16:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1501552 00:05:50.624 16:16:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1501552 00:05:50.624 16:16:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.624 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1501552 ']' 00:05:50.624 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.624 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.624 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.624 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.624 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.624 [2024-07-15 16:16:36.135748] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:50.624 [2024-07-15 16:16:36.135817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501552 ] 00:05:50.624 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.884 [2024-07-15 16:16:36.210029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.884 [2024-07-15 16:16:36.290863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.451 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.451 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:51.451 16:16:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:51.451 16:16:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:51.451 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.451 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.451 { 00:05:51.451 "filename": "/tmp/spdk_mem_dump.txt" 00:05:51.451 } 00:05:51.451 16:16:36 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.451 16:16:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.711 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:51.711 1 heaps totaling size 814.000000 MiB 00:05:51.711 size: 814.000000 MiB heap id: 0 00:05:51.711 end heaps---------- 00:05:51.711 8 mempools totaling size 598.116089 MiB 00:05:51.711 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:51.711 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:51.711 size: 84.521057 MiB name: bdev_io_1501552 00:05:51.712 size: 51.011292 MiB name: evtpool_1501552 
00:05:51.712 size: 50.003479 MiB name: msgpool_1501552 00:05:51.712 size: 21.763794 MiB name: PDU_Pool 00:05:51.712 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:51.712 size: 0.026123 MiB name: Session_Pool 00:05:51.712 end mempools------- 00:05:51.712 6 memzones totaling size 4.142822 MiB 00:05:51.712 size: 1.000366 MiB name: RG_ring_0_1501552 00:05:51.712 size: 1.000366 MiB name: RG_ring_1_1501552 00:05:51.712 size: 1.000366 MiB name: RG_ring_4_1501552 00:05:51.712 size: 1.000366 MiB name: RG_ring_5_1501552 00:05:51.712 size: 0.125366 MiB name: RG_ring_2_1501552 00:05:51.712 size: 0.015991 MiB name: RG_ring_3_1501552 00:05:51.712 end memzones------- 00:05:51.712 16:16:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:51.712 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:51.712 list of free elements. size: 12.519348 MiB 00:05:51.712 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:51.712 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:51.712 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:51.712 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:51.712 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:51.712 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:51.712 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:51.712 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:51.712 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:51.712 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:51.712 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:51.712 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:51.712 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:51.712 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:51.712 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:51.712 list of standard malloc elements. 
size: 199.218079 MiB 00:05:51.712 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:51.712 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:51.712 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:51.712 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:51.712 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:51.712 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:51.712 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:51.712 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:51.712 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:51.712 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:51.712 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:51.712 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:51.712 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:51.712 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:51.712 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:51.712 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:51.712 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:51.712 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:51.712 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:51.712 list of memzone associated elements. 
size: 602.262573 MiB 00:05:51.712 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:51.712 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:51.712 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:51.712 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:51.712 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:51.712 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1501552_0 00:05:51.712 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:51.712 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1501552_0 00:05:51.712 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:51.712 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1501552_0 00:05:51.712 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:51.712 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:51.712 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:51.712 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:51.712 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:51.712 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1501552 00:05:51.712 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:51.712 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1501552 00:05:51.712 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:51.712 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1501552 00:05:51.712 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:51.712 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:51.712 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:51.712 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:51.712 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:51.712 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:51.712 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:51.712 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:51.712 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:51.712 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1501552 00:05:51.712 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:51.712 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1501552 00:05:51.712 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:51.712 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1501552 00:05:51.712 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:51.712 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1501552 00:05:51.712 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:51.712 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1501552 00:05:51.712 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:51.712 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:51.712 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:51.712 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:51.712 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:51.712 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:51.712 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:51.712 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1501552 00:05:51.712 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:51.712 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:51.712 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:51.712 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:51.712 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:51.712 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1501552 00:05:51.712 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:51.712 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:51.712 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:51.712 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1501552 00:05:51.712 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:51.712 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1501552 00:05:51.712 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:51.712 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:51.712 16:16:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:51.712 16:16:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1501552 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1501552 ']' 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1501552 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1501552 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1501552' 00:05:51.712 killing process with pid 1501552 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1501552 00:05:51.712 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1501552 00:05:51.972 00:05:51.972 real 0m1.455s 00:05:51.972 user 0m1.486s 00:05:51.972 sys 0m0.454s 00:05:51.972 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.972 16:16:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.972 ************************************ 00:05:51.972 END TEST dpdk_mem_utility 00:05:51.972 ************************************ 00:05:51.972 16:16:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.972 16:16:37 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:51.972 16:16:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.972 16:16:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.972 16:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:52.231 ************************************ 00:05:52.231 START TEST event 00:05:52.231 ************************************ 00:05:52.231 16:16:37 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:52.231 * Looking for test storage... 
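[Editor's note] The dpdk_mem_utility pass above is a two-step flow: the env_dpdk_get_mem_stats RPC makes the running target write its allocator state to /tmp/spdk_mem_dump.txt (the {"filename": ...} reply in the trace), and scripts/dpdk_mem_info.py then parses that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the element-level listing for heap id 0. Against a live target:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0           # free/busy element detail for heap id 0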
00:05:52.231 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:52.231 16:16:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:52.231 16:16:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:52.231 16:16:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.231 16:16:37 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:52.231 16:16:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.231 16:16:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.231 ************************************ 00:05:52.231 START TEST event_perf 00:05:52.231 ************************************ 00:05:52.231 16:16:37 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.231 Running I/O for 1 seconds...[2024-07-15 16:16:37.726177] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:52.231 [2024-07-15 16:16:37.726290] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501782 ] 00:05:52.231 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.231 [2024-07-15 16:16:37.805062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.491 [2024-07-15 16:16:37.889816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.491 [2024-07-15 16:16:37.889900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.491 [2024-07-15 16:16:37.889919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.491 [2024-07-15 16:16:37.889921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.428 Running I/O for 1 seconds... 00:05:53.428 lcore 0: 191619 00:05:53.428 lcore 1: 191617 00:05:53.428 lcore 2: 191617 00:05:53.428 lcore 3: 191618 00:05:53.428 done. 00:05:53.428 00:05:53.428 real 0m1.260s 00:05:53.428 user 0m4.143s 00:05:53.428 sys 0m0.110s 00:05:53.428 16:16:38 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.428 16:16:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.428 ************************************ 00:05:53.428 END TEST event_perf 00:05:53.428 ************************************ 00:05:53.687 16:16:39 event -- common/autotest_common.sh@1142 -- # return 0 00:05:53.687 16:16:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:53.687 16:16:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:53.687 16:16:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.687 16:16:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.687 ************************************ 00:05:53.687 START TEST event_reactor 00:05:53.687 ************************************ 00:05:53.687 16:16:39 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:53.687 [2024-07-15 16:16:39.054198] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:05:53.687 [2024-07-15 16:16:39.054280] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501989 ] 00:05:53.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.687 [2024-07-15 16:16:39.130820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.687 [2024-07-15 16:16:39.209938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.062 test_start 00:05:55.062 oneshot 00:05:55.062 tick 100 00:05:55.062 tick 100 00:05:55.062 tick 250 00:05:55.062 tick 100 00:05:55.062 tick 100 00:05:55.062 tick 100 00:05:55.062 tick 250 00:05:55.062 tick 500 00:05:55.062 tick 100 00:05:55.062 tick 100 00:05:55.062 tick 250 00:05:55.062 tick 100 00:05:55.062 tick 100 00:05:55.062 test_end 00:05:55.062 00:05:55.062 real 0m1.247s 00:05:55.062 user 0m1.146s 00:05:55.062 sys 0m0.096s 00:05:55.062 16:16:40 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.062 16:16:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:55.062 ************************************ 00:05:55.063 END TEST event_reactor 00:05:55.063 ************************************ 00:05:55.063 16:16:40 event -- common/autotest_common.sh@1142 -- # return 0 00:05:55.063 16:16:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.063 16:16:40 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:55.063 16:16:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.063 16:16:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.063 ************************************ 00:05:55.063 START TEST event_reactor_perf 00:05:55.063 ************************************ 00:05:55.063 16:16:40 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.063 [2024-07-15 16:16:40.364256] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:05:55.063 [2024-07-15 16:16:40.364310] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502185 ] 00:05:55.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.063 [2024-07-15 16:16:40.443914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.063 [2024-07-15 16:16:40.526038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.440 test_start 00:05:56.440 test_end 00:05:56.440 Performance: 954715 events per second 00:05:56.440 00:05:56.440 real 0m1.241s 00:05:56.440 user 0m1.145s 00:05:56.440 sys 0m0.092s 00:05:56.440 16:16:41 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.440 16:16:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.440 ************************************ 00:05:56.440 END TEST event_reactor_perf 00:05:56.440 ************************************ 00:05:56.440 16:16:41 event -- common/autotest_common.sh@1142 -- # return 0 00:05:56.440 16:16:41 event -- event/event.sh@49 -- # uname -s 00:05:56.440 16:16:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:56.440 16:16:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:56.440 16:16:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.440 16:16:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.440 16:16:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.440 ************************************ 00:05:56.440 START TEST event_scheduler 00:05:56.441 ************************************ 00:05:56.441 16:16:41 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:56.441 * Looking for test storage... 00:05:56.441 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:56.441 16:16:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:56.441 16:16:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1502405 00:05:56.441 16:16:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.441 16:16:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:56.441 16:16:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1502405 00:05:56.441 16:16:41 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1502405 ']' 00:05:56.441 16:16:41 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.441 16:16:41 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.441 16:16:41 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.441 16:16:41 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.441 16:16:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.441 [2024-07-15 16:16:41.785462] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:05:56.441 [2024-07-15 16:16:41.785549] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502405 ] 00:05:56.441 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.441 [2024-07-15 16:16:41.857062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.441 [2024-07-15 16:16:41.939571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.441 [2024-07-15 16:16:41.939596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.441 [2024-07-15 16:16:41.939682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.441 [2024-07-15 16:16:41.939684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:57.377 16:16:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 [2024-07-15 16:16:42.638125] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:57.377 [2024-07-15 16:16:42.638149] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:57.377 [2024-07-15 16:16:42.638162] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.377 [2024-07-15 16:16:42.638170] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.377 [2024-07-15 16:16:42.638178] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 [2024-07-15 16:16:42.709033] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
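[Editor's note] Because the scheduler test app was launched with --wait-for-rpc, framework initialization parks until told to proceed, which is what lets framework_set_scheduler run before framework_start_init; the dpdk_governor *ERROR* above is non-fatal here, and the dynamic scheduler simply starts without the DPDK power governor (note the load-limit/core-limit/core-busy defaults it logs). Stripped of the rpc_cmd wrapper, the sequence is:

    ./scripts/rpc.py framework_set_scheduler dynamic   # choose scheduler while init is parked
    ./scripts/rpc.py framework_start_init              # now bring the framework up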
00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 ************************************ 00:05:57.377 START TEST scheduler_create_thread 00:05:57.377 ************************************ 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 2 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 3 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 4 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 5 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 6 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 7 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 8 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 9 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 10 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.377 16:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.750 16:16:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.750 16:16:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.750 16:16:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.750 16:16:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.750 16:16:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.123 16:16:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.123 00:06:00.123 real 0m2.619s 00:06:00.123 user 0m0.027s 00:06:00.123 sys 0m0.004s 00:06:00.123 16:16:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.123 16:16:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.123 ************************************ 00:06:00.123 END TEST scheduler_create_thread 00:06:00.123 ************************************ 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:00.123 16:16:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:00.123 16:16:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1502405 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1502405 ']' 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1502405 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1502405 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1502405' 00:06:00.123 killing process with pid 1502405 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1502405 00:06:00.123 16:16:45 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1502405 00:06:00.381 [2024-07-15 16:16:45.847454] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
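For orientation: the scheduler_create_thread test traced above drives SPDK's scheduler test plugin over JSON-RPC. A minimal sketch of the same call sequence, assuming scripts/rpc.py is reachable and the scheduler test app is listening on its default socket; the names, cpumasks (-m) and busy percentages (-a) mirror the trace but are otherwise illustrative:

  # Create pinned threads with a given cpumask and busy percentage.
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # Create an unpinned thread, then change how busy it reports itself.
  thread_id=$(rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  # Threads can also be deleted while the scheduler is running.
  thread_id=$(rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc.py --plugin scheduler_plugin scheduler_thread_delete "$thread_id"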
00:06:00.641 00:06:00.641 real 0m4.396s 00:06:00.641 user 0m8.308s 00:06:00.641 sys 0m0.448s 00:06:00.641 16:16:46 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.641 16:16:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.641 ************************************ 00:06:00.641 END TEST event_scheduler 00:06:00.641 ************************************ 00:06:00.641 16:16:46 event -- common/autotest_common.sh@1142 -- # return 0 00:06:00.641 16:16:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:00.641 16:16:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:00.641 16:16:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.641 16:16:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.641 16:16:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.641 ************************************ 00:06:00.641 START TEST app_repeat 00:06:00.641 ************************************ 00:06:00.641 16:16:46 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1502993 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1502993' 00:06:00.641 Process app_repeat pid: 1502993 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.641 spdk_app_start Round 0 00:06:00.641 16:16:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1502993 /var/tmp/spdk-nbd.sock 00:06:00.641 16:16:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1502993 ']' 00:06:00.641 16:16:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.641 16:16:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.641 16:16:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.641 16:16:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.641 16:16:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.641 [2024-07-15 16:16:46.182034] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
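For orientation: app_repeat restarts the SPDK app several times ("rounds") against the same nbd RPC socket. A sketch of the launch-and-wait pattern visible in the trace, with $SPDK_DIR standing in for the workspace path (an assumption) and killprocess/waitforlisten being the autotest_common helpers seen above:

  "$SPDK_DIR/test/event/app_repeat/app_repeat" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!                                       # 1502993 in this run
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock  # block until the UNIX socket answers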
00:06:00.641 [2024-07-15 16:16:46.182122] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502993 ] 00:06:00.930 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.930 [2024-07-15 16:16:46.260091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.930 [2024-07-15 16:16:46.347545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.930 [2024-07-15 16:16:46.347549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.544 16:16:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.544 16:16:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:01.544 16:16:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.802 Malloc0 00:06:01.802 16:16:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.802 Malloc1 00:06:02.070 16:16:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.070 /dev/nbd0 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:02.070 16:16:47 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.070 1+0 records in 00:06:02.070 1+0 records out 00:06:02.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222483 s, 18.4 MB/s 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:02.070 16:16:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.070 16:16:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.328 /dev/nbd1 00:06:02.328 16:16:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.328 16:16:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.328 16:16:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:02.328 16:16:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:02.328 16:16:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:02.328 16:16:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:02.328 16:16:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:02.328 16:16:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:02.328 16:16:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:02.328 16:16:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:02.329 16:16:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.329 1+0 records in 00:06:02.329 1+0 records out 00:06:02.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231358 s, 17.7 MB/s 00:06:02.329 16:16:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.329 16:16:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:02.329 16:16:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.329 16:16:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:02.329 16:16:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:02.329 16:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.329 
16:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.329 16:16:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.329 16:16:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.329 16:16:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.665 16:16:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.665 { 00:06:02.665 "nbd_device": "/dev/nbd0", 00:06:02.665 "bdev_name": "Malloc0" 00:06:02.665 }, 00:06:02.665 { 00:06:02.665 "nbd_device": "/dev/nbd1", 00:06:02.665 "bdev_name": "Malloc1" 00:06:02.665 } 00:06:02.665 ]' 00:06:02.665 16:16:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.665 { 00:06:02.665 "nbd_device": "/dev/nbd0", 00:06:02.665 "bdev_name": "Malloc0" 00:06:02.665 }, 00:06:02.665 { 00:06:02.665 "nbd_device": "/dev/nbd1", 00:06:02.665 "bdev_name": "Malloc1" 00:06:02.665 } 00:06:02.665 ]' 00:06:02.665 16:16:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.665 /dev/nbd1' 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.665 /dev/nbd1' 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.665 256+0 records in 00:06:02.665 256+0 records out 00:06:02.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113934 s, 92.0 MB/s 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.665 256+0 records in 00:06:02.665 256+0 records out 00:06:02.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204561 s, 51.3 MB/s 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.665 256+0 records in 00:06:02.665 256+0 records out 
00:06:02.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223912 s, 46.8 MB/s 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.665 16:16:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.666 16:16:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.666 16:16:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.666 16:16:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.666 16:16:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.666 16:16:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.666 16:16:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.666 16:16:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.923 16:16:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.181 16:16:48 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.181 16:16:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.439 16:16:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.439 16:16:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.439 16:16:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.697 [2024-07-15 16:16:49.164621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.697 [2024-07-15 16:16:49.245653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.697 [2024-07-15 16:16:49.245655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.955 [2024-07-15 16:16:49.293095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.955 [2024-07-15 16:16:49.293142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.488 16:16:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.488 16:16:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.488 spdk_app_start Round 1 00:06:06.488 16:16:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1502993 /var/tmp/spdk-nbd.sock 00:06:06.488 16:16:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1502993 ']' 00:06:06.488 16:16:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.488 16:16:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.488 16:16:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
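For orientation: each round repeats the same data-verify loop against two malloc bdevs exposed as kernel nbd devices. A sketch of one device's round trip, with sizes taken from the trace (a 64 MiB bdev with 4 KiB blocks, 1 MiB of random data) and /tmp/nbdrandtest standing in for the test's temp file (an assumption):

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096         # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0   # expose it as a block device
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB of random data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through nbd
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                             # verify the bytes round-tripped
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0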
00:06:06.488 16:16:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.488 16:16:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.747 16:16:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.747 16:16:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:06.747 16:16:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.747 Malloc0 00:06:07.006 16:16:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.006 Malloc1 00:06:07.006 16:16:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.006 16:16:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.265 /dev/nbd0 00:06:07.265 16:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.265 16:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.265 1+0 records in 00:06:07.265 1+0 records out 00:06:07.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249375 s, 16.4 MB/s 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.265 16:16:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.265 16:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.265 16:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.265 16:16:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.523 /dev/nbd1 00:06:07.524 16:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.524 16:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.524 1+0 records in 00:06:07.524 1+0 records out 00:06:07.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253483 s, 16.2 MB/s 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.524 16:16:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.524 16:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.524 16:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.524 16:16:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.524 16:16:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.524 16:16:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.783 { 00:06:07.783 "nbd_device": "/dev/nbd0", 00:06:07.783 "bdev_name": "Malloc0" 00:06:07.783 }, 00:06:07.783 { 00:06:07.783 "nbd_device": "/dev/nbd1", 00:06:07.783 "bdev_name": "Malloc1" 00:06:07.783 } 00:06:07.783 ]' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.783 { 00:06:07.783 "nbd_device": "/dev/nbd0", 00:06:07.783 "bdev_name": "Malloc0" 00:06:07.783 }, 00:06:07.783 { 00:06:07.783 "nbd_device": "/dev/nbd1", 00:06:07.783 "bdev_name": "Malloc1" 00:06:07.783 } 00:06:07.783 ]' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.783 /dev/nbd1' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.783 /dev/nbd1' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.783 256+0 records in 00:06:07.783 256+0 records out 00:06:07.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011048 s, 94.9 MB/s 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.783 256+0 records in 00:06:07.783 256+0 records out 00:06:07.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206267 s, 50.8 MB/s 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.783 256+0 records in 00:06:07.783 256+0 records out 00:06:07.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222583 s, 47.1 MB/s 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.783 16:16:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.041 16:16:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.300 16:16:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.300 16:16:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.559 16:16:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.818 [2024-07-15 16:16:54.256823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.818 [2024-07-15 16:16:54.341327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.818 [2024-07-15 16:16:54.341328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.818 [2024-07-15 16:16:54.389572] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.818 [2024-07-15 16:16:54.389620] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.108 16:16:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.108 16:16:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:12.108 spdk_app_start Round 2 00:06:12.108 16:16:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1502993 /var/tmp/spdk-nbd.sock 00:06:12.108 16:16:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1502993 ']' 00:06:12.108 16:16:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.108 16:16:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.108 16:16:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
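For orientation: between attach and detach, the test counts nbd devices by parsing the nbd_get_disks JSON; the jq filter below is taken verbatim from the trace, and the expected count is 2 while both disks are attached and 0 after they are stopped:

  disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')   # "/dev/nbd0", "/dev/nbd1"
  count=$(echo "$names" | grep -c /dev/nbd || true)         # grep -c exits 1 on zero matches
  if [ "$count" -ne 2 ]; then echo "unexpected nbd count: $count"; exit 1; fi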
00:06:12.108 16:16:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.108 16:16:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.108 16:16:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.108 16:16:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:12.108 16:16:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.108 Malloc0 00:06:12.108 16:16:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.108 Malloc1 00:06:12.108 16:16:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.108 16:16:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.108 16:16:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.108 16:16:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.108 16:16:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.108 16:16:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.109 16:16:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.368 /dev/nbd0 00:06:12.368 16:16:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.368 16:16:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.368 1+0 records in 00:06:12.368 1+0 records out 00:06:12.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002239 s, 18.3 MB/s 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.368 16:16:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.368 16:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.368 16:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.368 16:16:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.627 /dev/nbd1 00:06:12.627 16:16:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.627 16:16:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.627 1+0 records in 00:06:12.627 1+0 records out 00:06:12.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002516 s, 16.3 MB/s 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.627 16:16:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.627 16:16:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.627 16:16:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.627 16:16:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.627 16:16:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.627 16:16:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.887 { 00:06:12.887 "nbd_device": "/dev/nbd0", 00:06:12.887 "bdev_name": "Malloc0" 00:06:12.887 }, 00:06:12.887 { 00:06:12.887 "nbd_device": "/dev/nbd1", 00:06:12.887 "bdev_name": "Malloc1" 00:06:12.887 } 00:06:12.887 ]' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.887 { 00:06:12.887 "nbd_device": "/dev/nbd0", 00:06:12.887 "bdev_name": "Malloc0" 00:06:12.887 }, 00:06:12.887 { 00:06:12.887 "nbd_device": "/dev/nbd1", 00:06:12.887 "bdev_name": "Malloc1" 00:06:12.887 } 00:06:12.887 ]' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.887 /dev/nbd1' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.887 /dev/nbd1' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.887 256+0 records in 00:06:12.887 256+0 records out 00:06:12.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104269 s, 101 MB/s 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.887 256+0 records in 00:06:12.887 256+0 records out 00:06:12.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019336 s, 54.2 MB/s 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.887 256+0 records in 00:06:12.887 256+0 records out 00:06:12.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022017 s, 47.6 MB/s 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.887 16:16:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.147 16:16:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.406 16:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.665 16:16:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.665 16:16:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.665 16:16:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.924 [2024-07-15 16:16:59.385502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.924 [2024-07-15 16:16:59.466267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.924 [2024-07-15 16:16:59.466269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.183 [2024-07-15 16:16:59.510594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.183 [2024-07-15 16:16:59.510634] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.716 16:17:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1502993 /var/tmp/spdk-nbd.sock 00:06:16.716 16:17:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1502993 ']' 00:06:16.716 16:17:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.716 16:17:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.716 16:17:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
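For orientation: the three rounds come from a small loop in event.sh. Each round waits for the app to listen, runs the nbd verify, then asks the app to restart itself via spdk_kill_instance; a final wait-and-kill follows the loop. A sketch reconstructed from the event.sh@23..@39 trace lines, reusing $repeat_pid from the launch sketch earlier (helper internals elided):

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # ... create bdevs, attach nbd, write and verify, as traced above ...
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM  # app restarts for the next round
    sleep 3
  done
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock             # final Round 3 instance
  killprocess "$repeat_pid"                                      # then shut it down for good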
00:06:16.716 16:17:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.716 16:17:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:16.975 16:17:02 event.app_repeat -- event/event.sh@39 -- # killprocess 1502993 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1502993 ']' 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1502993 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1502993 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1502993' 00:06:16.975 killing process with pid 1502993 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1502993 00:06:16.975 16:17:02 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1502993 00:06:17.234 spdk_app_start is called in Round 0. 00:06:17.234 Shutdown signal received, stop current app iteration 00:06:17.234 Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 reinitialization... 00:06:17.234 spdk_app_start is called in Round 1. 00:06:17.234 Shutdown signal received, stop current app iteration 00:06:17.234 Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 reinitialization... 00:06:17.234 spdk_app_start is called in Round 2. 00:06:17.234 Shutdown signal received, stop current app iteration 00:06:17.234 Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 reinitialization... 00:06:17.234 spdk_app_start is called in Round 3. 
00:06:17.234 Shutdown signal received, stop current app iteration 00:06:17.234 16:17:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.234 16:17:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.234 00:06:17.234 real 0m16.430s 00:06:17.234 user 0m34.625s 00:06:17.234 sys 0m3.299s 00:06:17.234 16:17:02 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.234 16:17:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.234 ************************************ 00:06:17.234 END TEST app_repeat 00:06:17.234 ************************************ 00:06:17.234 16:17:02 event -- common/autotest_common.sh@1142 -- # return 0 00:06:17.234 16:17:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.234 16:17:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.234 16:17:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.234 16:17:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.234 16:17:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.235 ************************************ 00:06:17.235 START TEST cpu_locks 00:06:17.235 ************************************ 00:06:17.235 16:17:02 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.235 * Looking for test storage... 00:06:17.235 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:17.235 16:17:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.235 16:17:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.235 16:17:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.235 16:17:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.235 16:17:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.235 16:17:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.235 16:17:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.235 ************************************ 00:06:17.235 START TEST default_locks 00:06:17.235 ************************************ 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1505490 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1505490 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1505490 ']' 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
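The 'Waiting for process...' banner printed above comes from the waitforlisten helper every test here leans on. A rough sketch matching the traced locals (rpc_addr, max_retries=100); the actual readiness probe sits behind xtrace_disable in the log, so the loop body below is an assumption, not a copy:

    # Block until a freshly launched spdk_tgt answers on its RPC socket.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S $rpc_addr ]] && return 0           # socket is up: done
            sleep 0.1
        done
        return 1
    }
    # app_repeat's round loop (event/event.sh@34-38) is this plus a kill:
    #   rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM; sleep 3
    #   waitforlisten "$pid" /var/tmp/spdk-nbd.sock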
00:06:17.235 16:17:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.235 16:17:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.494 [2024-07-15 16:17:02.825829] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:17.494 [2024-07-15 16:17:02.825915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505490 ] 00:06:17.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.494 [2024-07-15 16:17:02.901096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.494 [2024-07-15 16:17:02.981257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.430 16:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.430 16:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:18.430 16:17:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1505490 00:06:18.431 16:17:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1505490 00:06:18.431 16:17:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.690 lslocks: write error 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1505490 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1505490 ']' 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1505490 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1505490 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1505490' 00:06:18.690 killing process with pid 1505490 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1505490 00:06:18.690 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1505490 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1505490 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1505490 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1505490 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1505490 ']' 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.950 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1505490) - No such process 00:06:18.950 ERROR: process (pid: 1505490) is no longer running 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.950 00:06:18.950 real 0m1.599s 00:06:18.950 user 0m1.658s 00:06:18.950 sys 0m0.586s 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.950 16:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.950 ************************************ 00:06:18.950 END TEST default_locks 00:06:18.950 ************************************ 00:06:18.950 16:17:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:18.950 16:17:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:18.950 16:17:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.950 16:17:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.950 16:17:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.950 ************************************ 00:06:18.950 START TEST default_locks_via_rpc 00:06:18.950 ************************************ 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1505702 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1505702 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 1505702 ']' 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.950 16:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.950 [2024-07-15 16:17:04.496497] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:18.950 [2024-07-15 16:17:04.496571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505702 ] 00:06:19.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.210 [2024-07-15 16:17:04.571772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.210 [2024-07-15 16:17:04.663253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.779 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.780 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.780 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1505702 00:06:19.780 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1505702 00:06:19.780 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
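Condensed, the default_locks_via_rpc sequence running here is: boot with locking disabled, flip it on over RPC, then prove the flock exists. A sketch reusing the RPC names and lock-file prefix from the trace, with spdk_tgt standing in for the full build/bin path:

    spdk_tgt -m 0x1 --disable-cpumask-locks &     # boots without claiming core 0
    pid=$!
    rpc_cmd framework_disable_cpumask_locks       # no-op while already disabled
    rpc_cmd framework_enable_cpumask_locks        # now /var/tmp/spdk_cpu_lock_000 appears
    lslocks -p "$pid" | grep -q spdk_cpu_lock     # locks_exist: passes once enabled

The stray 'lslocks: write error' lines in this log are most likely lslocks hitting a closed pipe after grep -q exits on its first match; the tests treat them as noise.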
00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1505702 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1505702 ']' 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1505702 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1505702 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1505702' 00:06:20.348 killing process with pid 1505702 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1505702 00:06:20.348 16:17:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1505702 00:06:20.607 00:06:20.607 real 0m1.709s 00:06:20.607 user 0m1.766s 00:06:20.607 sys 0m0.598s 00:06:20.607 16:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.607 16:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.607 ************************************ 00:06:20.607 END TEST default_locks_via_rpc 00:06:20.607 ************************************ 00:06:20.865 16:17:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:20.865 16:17:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.865 16:17:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.865 16:17:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.866 16:17:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.866 ************************************ 00:06:20.866 START TEST non_locking_app_on_locked_coremask 00:06:20.866 ************************************ 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1505915 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1505915 /var/tmp/spdk.sock 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1505915 ']' 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.866 16:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.866 [2024-07-15 16:17:06.287325] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:20.866 [2024-07-15 16:17:06.287393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505915 ] 00:06:20.866 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.866 [2024-07-15 16:17:06.363846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.124 [2024-07-15 16:17:06.451355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1506088 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1506088 /var/tmp/spdk2.sock 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1506088 ']' 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.689 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.689 [2024-07-15 16:17:07.145270] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:21.689 [2024-07-15 16:17:07.145336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506088 ] 00:06:21.689 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.689 [2024-07-15 16:17:07.244430] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
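The test now starting pairs a locked target with an unlocked one. Its whole scenario reduces to two launches (binary path abbreviated; flags and socket taken from the trace):

    # non_locking_app_on_locked_coremask: the first target flocks core 0,
    # the second reuses the mask but opts out of locking, so both come up.
    spdk_tgt -m 0x1 &                                                # claims spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # logs "CPU core locks deactivated."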
00:06:21.689 [2024-07-15 16:17:07.244462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.946 [2024-07-15 16:17:07.411864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.511 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.511 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:22.511 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1505915 00:06:22.511 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1505915 00:06:22.511 16:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.884 lslocks: write error 00:06:23.884 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1505915 00:06:23.884 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1505915 ']' 00:06:23.884 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1505915 00:06:23.884 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:23.884 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.884 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1505915 00:06:23.885 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.885 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.885 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1505915' 00:06:23.885 killing process with pid 1505915 00:06:23.885 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1505915 00:06:23.885 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1505915 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1506088 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1506088 ']' 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1506088 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1506088 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1506088' 00:06:24.451 
killing process with pid 1506088 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1506088 00:06:24.451 16:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1506088 00:06:24.710 00:06:24.710 real 0m3.974s 00:06:24.710 user 0m4.189s 00:06:24.710 sys 0m1.423s 00:06:24.710 16:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.710 16:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.710 ************************************ 00:06:24.710 END TEST non_locking_app_on_locked_coremask 00:06:24.710 ************************************ 00:06:24.710 16:17:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.710 16:17:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.710 16:17:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.710 16:17:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.710 16:17:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.968 ************************************ 00:06:24.968 START TEST locking_app_on_unlocked_coremask 00:06:24.968 ************************************ 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1506481 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1506481 /var/tmp/spdk.sock 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1506481 ']' 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.968 16:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.968 [2024-07-15 16:17:10.333909] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
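killprocess, which has now torn down several targets in this section, follows the same traced steps each time (autotest_common.sh@948-@972). A sketch with the sudo special case reduced to a bail-out:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                            # gone already
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in every case here
            [[ $process_name == sudo ]] && return 1           # sudo handling elided in this sketch
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                   # reap; SIGTERM status is expected
    }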
00:06:24.968 [2024-07-15 16:17:10.333997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506481 ] 00:06:24.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.968 [2024-07-15 16:17:10.408559] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.968 [2024-07-15 16:17:10.408588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.968 [2024-07-15 16:17:10.496857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1506655 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1506655 /var/tmp/spdk2.sock 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1506655 ']' 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.904 16:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.904 [2024-07-15 16:17:11.184411] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:06:25.904 [2024-07-15 16:17:11.184483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506655 ] 00:06:25.904 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.904 [2024-07-15 16:17:11.283316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.904 [2024-07-15 16:17:11.451293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.471 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.471 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:26.471 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1506655 00:06:26.471 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1506655 00:06:26.471 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.406 lslocks: write error 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1506481 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1506481 ']' 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1506481 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1506481 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1506481' 00:06:27.406 killing process with pid 1506481 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1506481 00:06:27.406 16:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1506481 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1506655 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1506655 ']' 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1506655 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1506655 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1506655' 00:06:27.974 killing process with pid 1506655 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1506655 00:06:27.974 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1506655 00:06:28.542 00:06:28.542 real 0m3.579s 00:06:28.542 user 0m3.779s 00:06:28.542 sys 0m1.206s 00:06:28.542 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.542 16:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.542 ************************************ 00:06:28.542 END TEST locking_app_on_unlocked_coremask 00:06:28.542 ************************************ 00:06:28.542 16:17:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:28.542 16:17:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.542 16:17:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.542 16:17:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.542 16:17:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.542 ************************************ 00:06:28.542 START TEST locking_app_on_locked_coremask 00:06:28.542 ************************************ 00:06:28.542 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:28.542 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1507046 00:06:28.542 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1507046 /var/tmp/spdk.sock 00:06:28.543 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1507046 ']' 00:06:28.543 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.543 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.543 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.543 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.543 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.543 16:17:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.543 [2024-07-15 16:17:13.987053] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
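locking_app_on_locked_coremask, booting here, is the strict counterpart of the previous two tests: both instances want the lock, so the second must be refused. In outline (binary path abbreviated):

    spdk_tgt -m 0x1 &                            # wins the flock on core 0
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # aborts: core 0 already claimed
    NOT waitforlisten $! /var/tmp/spdk2.sock     # the startup failure is the pass condition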
00:06:28.543 [2024-07-15 16:17:13.987113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507046 ] 00:06:28.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.543 [2024-07-15 16:17:14.059299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.801 [2024-07-15 16:17:14.148098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1507220 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1507220 /var/tmp/spdk2.sock 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1507220 /var/tmp/spdk2.sock 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1507220 /var/tmp/spdk2.sock 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1507220 ']' 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.369 16:17:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.369 [2024-07-15 16:17:14.831024] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
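The NOT wrapper doing that inversion is traced at autotest_common.sh@648-@675; its shape is below, with whatever runs between the es > 128 test and the final check assumed rather than copied:

    # Run a command that is *expected* to fail, and invert the verdict.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # assumed: deaths by signal stay failures
        (( !es == 0 ))                   # arithmetic: exit 0 only when es != 0
    }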
00:06:29.369 [2024-07-15 16:17:14.831114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507220 ] 00:06:29.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.369 [2024-07-15 16:17:14.929062] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1507046 has claimed it. 00:06:29.369 [2024-07-15 16:17:14.929094] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.937 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1507220) - No such process 00:06:29.937 ERROR: process (pid: 1507220) is no longer running 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1507046 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1507046 00:06:29.937 16:17:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.504 lslocks: write error 00:06:30.504 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1507046 00:06:30.504 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1507046 ']' 00:06:30.504 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1507046 00:06:30.504 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:30.764 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.764 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1507046 00:06:30.764 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.764 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.764 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1507046' 00:06:30.764 killing process with pid 1507046 00:06:30.764 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1507046 00:06:30.764 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1507046 00:06:31.024 00:06:31.024 real 0m2.497s 00:06:31.024 user 0m2.705s 00:06:31.024 sys 0m0.809s 00:06:31.024 16:17:16 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.024 16:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.024 ************************************ 00:06:31.024 END TEST locking_app_on_locked_coremask 00:06:31.024 ************************************ 00:06:31.024 16:17:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.024 16:17:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.024 16:17:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.024 16:17:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.024 16:17:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.024 ************************************ 00:06:31.024 START TEST locking_overlapped_coremask 00:06:31.024 ************************************ 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1507427 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1507427 /var/tmp/spdk.sock 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1507427 ']' 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.024 16:17:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.024 [2024-07-15 16:17:16.558633] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
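locking_overlapped_coremask, now booting with -m 0x7, widens the same idea to partially overlapping masks (binary path abbreviated):

    # Cores are claimed per mask bit: 0x7 locks cores 0-2, so a 0x1c
    # (cores 2-4) target trips over core 2 and must refuse to start.
    spdk_tgt -m 0x7 &                             # locks cores 0, 1, 2
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock &     # "Cannot create lock on core 2"
    NOT waitforlisten $! /var/tmp/spdk2.sock      # expected startup failure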
00:06:31.024 [2024-07-15 16:17:16.558718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507427 ] 00:06:31.024 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.283 [2024-07-15 16:17:16.634215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.283 [2024-07-15 16:17:16.726786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.283 [2024-07-15 16:17:16.726871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.283 [2024-07-15 16:17:16.726874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1507586 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1507586 /var/tmp/spdk2.sock 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1507586 /var/tmp/spdk2.sock 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1507586 /var/tmp/spdk2.sock 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1507586 ']' 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.850 16:17:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.850 [2024-07-15 16:17:17.415133] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:06:31.850 [2024-07-15 16:17:17.415201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507586 ] 00:06:32.109 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.109 [2024-07-15 16:17:17.516663] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1507427 has claimed it. 00:06:32.109 [2024-07-15 16:17:17.516700] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.677 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1507586) - No such process 00:06:32.677 ERROR: process (pid: 1507586) is no longer running 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1507427 00:06:32.677 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1507427 ']' 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1507427 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1507427 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1507427' 00:06:32.678 killing process with pid 1507427 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1507427 00:06:32.678 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1507427 00:06:32.937 00:06:32.937 real 0m1.940s 00:06:32.937 user 0m5.371s 00:06:32.937 sys 0m0.504s 00:06:32.937 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.937 16:17:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.937 ************************************ 00:06:32.937 END TEST locking_overlapped_coremask 00:06:32.937 ************************************ 00:06:33.197 16:17:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.197 16:17:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:33.197 16:17:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.197 16:17:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.197 16:17:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.197 ************************************ 00:06:33.197 START TEST locking_overlapped_coremask_via_rpc 00:06:33.197 ************************************ 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1507690 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1507690 /var/tmp/spdk.sock 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1507690 ']' 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.197 16:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.197 [2024-07-15 16:17:18.566874] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:33.197 [2024-07-15 16:17:18.566934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507690 ] 00:06:33.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.197 [2024-07-15 16:17:18.640678] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
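check_remaining_locks, traced at the end of that test (event/cpu_locks.sh@36-@38), is a plain glob-against-brace-expansion comparison:

    # After the failed 0x1c boot, exactly the 0x7 target's lock files must
    # remain: spdk_cpu_lock_000 through _002.
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }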
00:06:33.197 [2024-07-15 16:17:18.640709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.197 [2024-07-15 16:17:18.733486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.197 [2024-07-15 16:17:18.733585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.197 [2024-07-15 16:17:18.733588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1507826 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1507826 /var/tmp/spdk2.sock 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1507826 ']' 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.135 16:17:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.135 [2024-07-15 16:17:19.421259] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:34.135 [2024-07-15 16:17:19.421327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507826 ] 00:06:34.135 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.135 [2024-07-15 16:17:19.523545] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:34.135 [2024-07-15 16:17:19.523575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.135 [2024-07-15 16:17:19.686608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.135 [2024-07-15 16:17:19.690575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.135 [2024-07-15 16:17:19.690576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.704 [2024-07-15 16:17:20.261610] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1507690 has claimed it. 
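The claim failure above comes from two spdk_tgt instances whose core masks overlap on core 2 (0x7 = cores 0-2, 0x1c = cores 2-4). A hedged sketch of the setup, with binary and socket paths as they appear in this log, relative to an SPDK checkout:

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                         # first target: cores 0,1,2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks & # second target: cores 2,3,4
    # Both start cleanly because locking is disabled; the conflict surfaces only
    # once the first target takes the locks and the second tries to claim core 2.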
00:06:34.704 request: 00:06:34.704 { 00:06:34.704 "method": "framework_enable_cpumask_locks", 00:06:34.704 "req_id": 1 00:06:34.704 } 00:06:34.704 Got JSON-RPC error response 00:06:34.704 response: 00:06:34.704 { 00:06:34.704 "code": -32603, 00:06:34.704 "message": "Failed to claim CPU core: 2" 00:06:34.704 } 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1507690 /var/tmp/spdk.sock 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1507690 ']' 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.704 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.705 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.705 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.705 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1507826 /var/tmp/spdk2.sock 00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1507826 ']' 00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
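The failing call above can be reproduced with SPDK's rpc.py client (script path as in a standard SPDK tree; a sketch, not part of this run's recorded commands):

    ./scripts/rpc.py framework_enable_cpumask_locks                         # first target: takes locks 000-002
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # second call returns the JSON-RPC error shown above:
    # code -32603, "Failed to claim CPU core: 2"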
00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.964 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.223 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.223 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:35.223 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:35.223 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.223 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.223 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.223 00:06:35.223 real 0m2.114s 00:06:35.223 user 0m0.822s 00:06:35.223 sys 0m0.223s 00:06:35.223 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.223 16:17:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.223 ************************************ 00:06:35.223 END TEST locking_overlapped_coremask_via_rpc 00:06:35.223 ************************************ 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:35.223 16:17:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:35.223 16:17:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1507690 ]] 00:06:35.223 16:17:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1507690 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1507690 ']' 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1507690 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1507690 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1507690' 00:06:35.223 killing process with pid 1507690 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1507690 00:06:35.223 16:17:20 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1507690 00:06:35.791 16:17:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1507826 ]] 00:06:35.791 16:17:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1507826 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1507826 ']' 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1507826 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1507826 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1507826' 00:06:35.791 killing process with pid 1507826 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1507826 00:06:35.791 16:17:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1507826 00:06:36.051 16:17:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:36.051 16:17:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:36.051 16:17:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1507690 ]] 00:06:36.051 16:17:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1507690 00:06:36.051 16:17:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1507690 ']' 00:06:36.051 16:17:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1507690 00:06:36.051 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1507690) - No such process 00:06:36.051 16:17:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1507690 is not found' 00:06:36.051 Process with pid 1507690 is not found 00:06:36.051 16:17:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1507826 ]] 00:06:36.051 16:17:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1507826 00:06:36.051 16:17:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1507826 ']' 00:06:36.051 16:17:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1507826 00:06:36.051 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1507826) - No such process 00:06:36.051 16:17:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1507826 is not found' 00:06:36.051 Process with pid 1507826 is not found 00:06:36.051 16:17:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:36.051 00:06:36.051 real 0m18.855s 00:06:36.051 user 0m31.075s 00:06:36.051 sys 0m6.402s 00:06:36.051 16:17:21 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.051 16:17:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.051 ************************************ 00:06:36.051 END TEST cpu_locks 00:06:36.051 ************************************ 00:06:36.051 16:17:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.051 00:06:36.051 real 0m43.993s 00:06:36.051 user 1m20.641s 00:06:36.051 sys 0m10.859s 00:06:36.051 16:17:21 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.051 16:17:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.051 ************************************ 00:06:36.051 END TEST event 00:06:36.051 ************************************ 00:06:36.051 16:17:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.051 16:17:21 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:36.051 16:17:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.051 16:17:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.051 
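The cleanup above leans on kill -0, which probes whether a pid exists without delivering a signal; a minimal sketch of the killprocess pattern (the real helper lives in autotest_common.sh, so details here are assumptions):

    if kill -0 "$pid" 2>/dev/null; then   # probe only: signal 0 sends nothing
        kill "$pid" && wait "$pid"
    else
        echo "Process with pid $pid is not found"
    fi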
16:17:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.310 ************************************ 00:06:36.310 START TEST thread 00:06:36.310 ************************************ 00:06:36.310 16:17:21 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:36.310 * Looking for test storage... 00:06:36.310 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:36.310 16:17:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.310 16:17:21 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:36.310 16:17:21 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.310 16:17:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.310 ************************************ 00:06:36.310 START TEST thread_poller_perf 00:06:36.310 ************************************ 00:06:36.310 16:17:21 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.310 [2024-07-15 16:17:21.786615] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:36.310 [2024-07-15 16:17:21.786699] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508269 ] 00:06:36.310 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.310 [2024-07-15 16:17:21.863985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.569 [2024-07-15 16:17:21.945213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.569 Running 1000 pollers for 1 seconds with 1 microseconds period. 
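Per the run_test lines and the banner they produce, poller_perf takes -b (poller count), -l (poller period in microseconds) and -t (run time in seconds); the two configurations exercised here (the 0 us variant follows below) are:

    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 second
    ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same pollers, 0 us period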
00:06:37.507 ======================================
00:06:37.507 busy:2303007540 (cyc)
00:06:37.507 total_run_count: 842000
00:06:37.507 tsc_hz: 2300000000 (cyc)
00:06:37.507 ======================================
00:06:37.507 poller_cost: 2735 (cyc), 1189 (nsec)
00:06:37.507
00:06:37.507 real 0m1.255s
00:06:37.507 user 0m1.156s
00:06:37.507 sys 0m0.094s
00:06:37.507 16:17:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:37.507 16:17:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:37.507 ************************************
00:06:37.507 END TEST thread_poller_perf
00:06:37.507 ************************************
00:06:37.507 16:17:23 thread -- common/autotest_common.sh@1142 -- # return 0
00:06:37.507 16:17:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:37.507 16:17:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:06:37.507 16:17:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:37.507 16:17:23 thread -- common/autotest_common.sh@10 -- # set +x
00:06:37.766 ************************************
00:06:37.766 START TEST thread_poller_perf
00:06:37.766 ************************************
00:06:37.766 16:17:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:37.766 [2024-07-15 16:17:23.124600] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization...
00:06:37.766 [2024-07-15 16:17:23.124687] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508470 ]
00:06:37.766 EAL: No free 2048 kB hugepages reported on node 1
00:06:37.766 [2024-07-15 16:17:23.204529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.766 [2024-07-15 16:17:23.290011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.766 Running 1000 pollers for 1 seconds with 0 microseconds period.
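poller_cost above is simply busy cycles divided by total runs: 2303007540 / 842000 ≈ 2735 cycles per poll, which at tsc_hz = 2.3 GHz is 2735 / 2.3 ≈ 1189 ns. The 0-period run below amortizes the same way: 2301336626 / 14015000 ≈ 164 cycles, about 71 ns per poll.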
00:06:39.144 ======================================
00:06:39.144 busy:2301336626 (cyc)
00:06:39.144 total_run_count: 14015000
00:06:39.144 tsc_hz: 2300000000 (cyc)
00:06:39.144 ======================================
00:06:39.144 poller_cost: 164 (cyc), 71 (nsec)
00:06:39.144
00:06:39.144 real 0m1.260s
00:06:39.144 user 0m1.146s
00:06:39.144 sys 0m0.109s
00:06:39.144 16:17:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:39.144 16:17:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:39.144 ************************************
00:06:39.144 END TEST thread_poller_perf
00:06:39.144 ************************************
00:06:39.144 16:17:24 thread -- common/autotest_common.sh@1142 -- # return 0
00:06:39.144 16:17:24 thread -- thread/thread.sh@17 -- # [[ n != \y ]]
00:06:39.144 16:17:24 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock
00:06:39.144 16:17:24 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:39.144 16:17:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:39.144 16:17:24 thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.144 ************************************
00:06:39.144 START TEST thread_spdk_lock
00:06:39.144 ************************************
00:06:39.144 16:17:24 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock
00:06:39.144 [2024-07-15 16:17:24.447198] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization...
00:06:39.144 [2024-07-15 16:17:24.447262] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508665 ]
00:06:39.145 EAL: No free 2048 kB hugepages reported on node 1
00:06:39.145 [2024-07-15 16:17:24.512876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:39.145 [2024-07-15 16:17:24.596575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.145 [2024-07-15 16:17:24.596578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.714 [2024-07-15 16:17:25.085352] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:06:39.714 [2024-07-15 16:17:25.085385] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:06:39.714 [2024-07-15 16:17:25.085396] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14ce200
00:06:39.714 [2024-07-15 16:17:25.086258] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:06:39.714 [2024-07-15 16:17:25.086361] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:06:39.714 [2024-07-15 16:17:25.086380] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:06:39.714 Starting test contend
00:06:39.714   Worker    Delay  Wait us  Hold us  Total us
00:06:39.714        0        3   180487   185110    365597
00:06:39.714        1        5    96047   285679    381727
00:06:39.714 PASS test contend
00:06:39.714 Starting test hold_by_poller
00:06:39.714 PASS test hold_by_poller
00:06:39.714 Starting test hold_by_message
00:06:39.714 PASS test hold_by_message
00:06:39.714 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary:
00:06:39.714 100014 assertions passed
00:06:39.714 0 assertions failed
00:06:39.714
00:06:39.714 real 0m0.716s
00:06:39.714 user 0m1.117s
00:06:39.714 sys 0m0.086s
00:06:39.714 16:17:25 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:39.714 16:17:25 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:06:39.714 ************************************
00:06:39.714 END TEST thread_spdk_lock
00:06:39.714 ************************************
00:06:39.714 16:17:25 thread -- common/autotest_common.sh@1142 -- # return 0
00:06:39.714
00:06:39.714 real 0m3.558s
00:06:39.714 user 0m3.543s
00:06:39.714 sys 0m0.515s
00:06:39.714 16:17:25 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:39.714 16:17:25 thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.714 ************************************
00:06:39.714 END TEST thread
00:06:39.714 ************************************
00:06:39.714 16:17:25 -- common/autotest_common.sh@1142 -- # return 0
00:06:39.714 16:17:25 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh
16:17:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:39.714 16:17:25 -- common/autotest_common.sh@1105 -- # xtrace_disable
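In the contend table above, each worker row reports the time spent waiting for versus holding the spinlock; Total us is the sum of the two (180487 + 185110 = 365597 for worker 0, while worker 1 is within 1 us of its printed sum, presumably rounding).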
00:06:39.974 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:39.974 16:17:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:39.974 16:17:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:39.974 16:17:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:39.974 16:17:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1508792 00:06:39.974 16:17:25 accel -- accel/accel.sh@63 -- # waitforlisten 1508792 00:06:39.974 16:17:25 accel -- common/autotest_common.sh@829 -- # '[' -z 1508792 ']' 00:06:39.974 16:17:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:39.974 16:17:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.974 16:17:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.974 16:17:25 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:39.974 16:17:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.974 16:17:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.974 16:17:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.974 16:17:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.974 16:17:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.974 16:17:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:39.974 16:17:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.974 16:17:25 accel -- accel/accel.sh@41 -- # jq -r . 00:06:39.974 16:17:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.974 16:17:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.974 [2024-07-15 16:17:25.391005] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:39.974 [2024-07-15 16:17:25.391078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508792 ] 00:06:39.974 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.974 [2024-07-15 16:17:25.464880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.233 [2024-07-15 16:17:25.558986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@862 -- # return 0 00:06:40.802 16:17:26 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:40.802 16:17:26 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:40.802 16:17:26 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:40.802 16:17:26 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:40.802 16:17:26 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:40.802 16:17:26 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:40.802 16:17:26 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 
16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:40.802 16:17:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:40.802 16:17:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:40.802 16:17:26 accel -- accel/accel.sh@75 -- # killprocess 1508792 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@948 -- # '[' -z 1508792 ']' 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@952 -- # kill -0 1508792 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@953 -- # uname 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1508792 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1508792' 00:06:40.802 killing process with pid 1508792 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@967 -- # kill 1508792 00:06:40.802 16:17:26 accel -- common/autotest_common.sh@972 -- # wait 1508792 00:06:41.372 16:17:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:41.372 16:17:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:41.372 16:17:26 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.372 16:17:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.372 16:17:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.372 16:17:26 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:41.372 16:17:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
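The IFS== loop above parses opcode=module pairs out of accel_get_opc_assignments; the same listing can be pulled manually, e.g. (rpc.py path as in a standard SPDK tree, jq filter verbatim from the script):

    ./scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # every opcode maps to software here, since no hardware accel module was configured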
00:06:41.372 16:17:26 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.372 16:17:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:41.372 16:17:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.372 16:17:26 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:41.372 16:17:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.372 16:17:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.372 16:17:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.372 ************************************ 00:06:41.372 START TEST accel_missing_filename 00:06:41.372 ************************************ 00:06:41.372 16:17:26 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:41.372 16:17:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:41.372 16:17:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:41.372 16:17:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:41.372 16:17:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.372 16:17:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:41.372 16:17:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.372 16:17:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:41.372 16:17:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:41.372 [2024-07-15 16:17:26.789612] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:41.372 [2024-07-15 16:17:26.789672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509030 ] 00:06:41.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.372 [2024-07-15 16:17:26.861356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.372 [2024-07-15 16:17:26.945520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.631 [2024-07-15 16:17:26.992004] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.631 [2024-07-15 16:17:27.061416] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:41.631 A filename is required. 
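The accel_missing_filename failure above is the expected one: compress/decompress workloads need an input file via -l, per the usage text quoted further below. A hedged sketch, paths relative to an SPDK checkout:

    ./build/examples/accel_perf -t 1 -w compress                      # rejected: a filename is required
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib  # input file supplied
    # Note: the next test shows that compress also rejects the -y verify flag.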
00:06:41.631 16:17:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:41.631 16:17:27 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:41.631 16:17:27 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:41.631 16:17:27 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:41.631 16:17:27 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:41.631 16:17:27 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:41.631 00:06:41.631 real 0m0.362s 00:06:41.631 user 0m0.250s 00:06:41.631 sys 0m0.148s 00:06:41.631 16:17:27 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.631 16:17:27 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:41.631 ************************************ 00:06:41.631 END TEST accel_missing_filename 00:06:41.631 ************************************ 00:06:41.631 16:17:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.631 16:17:27 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:41.631 16:17:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:41.631 16:17:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.631 16:17:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.891 ************************************ 00:06:41.891 START TEST accel_compress_verify 00:06:41.891 ************************************ 00:06:41.891 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:41.891 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:41.891 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:41.891 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:41.891 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.891 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:41.891 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.891 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:41.891 16:17:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:41.891 16:17:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:41.891 16:17:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.891 16:17:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.891 16:17:27 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.891 16:17:27 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.891 
16:17:27 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.891 16:17:27 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:41.891 16:17:27 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:41.891 [2024-07-15 16:17:27.233027] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:41.891 [2024-07-15 16:17:27.233125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509136 ] 00:06:41.891 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.891 [2024-07-15 16:17:27.308987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.891 [2024-07-15 16:17:27.392130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.891 [2024-07-15 16:17:27.434319] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.151 [2024-07-15 16:17:27.494370] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:42.151 00:06:42.151 Compression does not support the verify option, aborting. 00:06:42.151 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:42.151 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.151 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:42.151 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:42.151 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:42.151 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.151 00:06:42.151 real 0m0.356s 00:06:42.151 user 0m0.252s 00:06:42.151 sys 0m0.140s 00:06:42.151 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.151 16:17:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:42.151 ************************************ 00:06:42.151 END TEST accel_compress_verify 00:06:42.151 ************************************ 00:06:42.151 16:17:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.151 16:17:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:42.151 16:17:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:42.151 16:17:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.151 16:17:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.151 ************************************ 00:06:42.151 START TEST accel_wrong_workload 00:06:42.151 ************************************ 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:42.151 16:17:27 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:42.151 16:17:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:42.151 Unsupported workload type: foobar 00:06:42.151 [2024-07-15 16:17:27.659588] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:42.151 accel_perf options: 00:06:42.151 [-h help message] 00:06:42.151 [-q queue depth per core] 00:06:42.151 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:42.151 [-T number of threads per core 00:06:42.151 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:42.151 [-t time in seconds] 00:06:42.151 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:42.151 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:42.151 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:42.151 [-l for compress/decompress workloads, name of uncompressed input file 00:06:42.151 [-S for crc32c workload, use this seed value (default 0) 00:06:42.151 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:42.151 [-f for fill workload, use this BYTE value (default 255) 00:06:42.151 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:42.151 [-y verify result if this switch is on] 00:06:42.151 [-a tasks to allocate per core (default: same value as -q)] 00:06:42.151 Can be used to spread operations across a wider range of memory. 
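The usage dump above is accel_perf rejecting an unknown -w value, and the next test below trips the same parser with a negative -x (the xor source-buffer count, minimum 2). A sketch of the rejected calls next to accepted ones, with paths relative to an SPDK checkout:

    ./build/examples/accel_perf -t 1 -w foobar         # rejected: Unsupported workload type
    ./build/examples/accel_perf -t 1 -w copy           # a workload type from the supported list
    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: -x must be non-negative
    ./build/examples/accel_perf -t 1 -w xor -y -x 3    # xor across three source buffers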
00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.151 00:06:42.151 real 0m0.028s 00:06:42.151 user 0m0.017s 00:06:42.151 sys 0m0.011s 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.151 16:17:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:42.151 ************************************ 00:06:42.151 END TEST accel_wrong_workload 00:06:42.151 ************************************ 00:06:42.151 Error: writing output failed: Broken pipe 00:06:42.151 16:17:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.151 16:17:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:42.151 16:17:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:42.151 16:17:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.151 16:17:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.412 ************************************ 00:06:42.412 START TEST accel_negative_buffers 00:06:42.412 ************************************ 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:42.412 16:17:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:42.412 -x option must be non-negative. 
00:06:42.412 [2024-07-15 16:17:27.755744] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:42.412 accel_perf options: 00:06:42.412 [-h help message] 00:06:42.412 [-q queue depth per core] 00:06:42.412 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:42.412 [-T number of threads per core 00:06:42.412 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:42.412 [-t time in seconds] 00:06:42.412 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:42.412 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:42.412 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:42.412 [-l for compress/decompress workloads, name of uncompressed input file 00:06:42.412 [-S for crc32c workload, use this seed value (default 0) 00:06:42.412 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:42.412 [-f for fill workload, use this BYTE value (default 255) 00:06:42.412 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:42.412 [-y verify result if this switch is on] 00:06:42.412 [-a tasks to allocate per core (default: same value as -q)] 00:06:42.412 Can be used to spread operations across a wider range of memory. 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.412 00:06:42.412 real 0m0.030s 00:06:42.412 user 0m0.012s 00:06:42.412 sys 0m0.018s 00:06:42.412 Error: writing output failed: Broken pipe 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.412 16:17:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:42.412 ************************************ 00:06:42.412 END TEST accel_negative_buffers 00:06:42.412 ************************************ 00:06:42.412 16:17:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.412 16:17:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:42.412 16:17:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:42.412 16:17:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.412 16:17:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.412 ************************************ 00:06:42.412 START TEST accel_crc32c 00:06:42.412 ************************************ 00:06:42.412 16:17:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 
00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:42.412 16:17:27 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:42.412 [2024-07-15 16:17:27.843245] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:42.412 [2024-07-15 16:17:27.843314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509203 ] 00:06:42.412 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.412 [2024-07-15 16:17:27.918489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.672 [2024-07-15 16:17:28.003253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 16:17:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 
16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:44.050 16:17:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.050 00:06:44.050 real 0m1.381s 00:06:44.050 user 0m1.247s 00:06:44.050 sys 0m0.147s 00:06:44.050 16:17:29 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.050 16:17:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:44.050 ************************************ 00:06:44.050 END TEST accel_crc32c 00:06:44.050 ************************************ 00:06:44.050 16:17:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.050 16:17:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:44.050 16:17:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:44.050 16:17:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.050 16:17:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.050 ************************************ 00:06:44.050 START TEST accel_crc32c_C2 00:06:44.050 ************************************ 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:44.050 [2024-07-15 16:17:29.276310] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:44.050 [2024-07-15 16:17:29.276354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509402 ] 00:06:44.050 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.050 [2024-07-15 16:17:29.347616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.050 [2024-07-15 16:17:29.431731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:44.050 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 
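accel_crc32c_C2 is the same checksum workload with one twist: per the options listing earlier, -C sets the io vector size to test, so each operation here checksums a 2-element iovec instead of a single flat buffer (and val=0 above shows the seed reverting to its default). Standalone sketch:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w crc32c -y -C 2   # -C 2 = io vector (chained buffer) size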
00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.051 16:17:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.426 00:06:45.426 real 0m1.366s 00:06:45.426 user 0m1.246s 00:06:45.426 sys 0m0.134s 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.426 16:17:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:45.426 ************************************ 00:06:45.426 END TEST accel_crc32c_C2 00:06:45.426 ************************************ 00:06:45.426 16:17:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.426 16:17:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:45.426 16:17:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:45.426 16:17:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.426 16:17:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.426 ************************************ 00:06:45.426 START TEST accel_copy 00:06:45.426 ************************************ 00:06:45.426 16:17:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:45.426 16:17:30 
accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:45.426 [2024-07-15 16:17:30.715937] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:45.426 [2024-07-15 16:17:30.715983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509597 ] 00:06:45.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.426 [2024-07-15 16:17:30.786888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.426 [2024-07-15 16:17:30.873130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.426 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 
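accel_copy repeats the same pattern with the simplest workload of the set: plain 4096-byte buffer copies through the software module, still result-verified via -y. Standalone sketch:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w copy -y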
00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.427 16:17:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.802 16:17:32 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:46.802 16:17:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.802 00:06:46.802 real 0m1.365s 00:06:46.802 user 0m1.244s 00:06:46.802 sys 0m0.133s 00:06:46.802 16:17:32 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.802 16:17:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:46.802 ************************************ 00:06:46.802 END TEST accel_copy 00:06:46.802 ************************************ 00:06:46.802 16:17:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.802 16:17:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.802 16:17:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:46.802 16:17:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.802 16:17:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.803 ************************************ 00:06:46.803 START TEST accel_fill 00:06:46.803 ************************************ 00:06:46.803 16:17:32 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 
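The fill case is the first to override the defaults, and every flag comes straight from the options listing above: -f 128 sets the fill byte (default 255), -q 64 sets the queue depth per core, and -a 64 allocates 64 tasks per core (the default would be the -q value anyway; values larger than -q spread operations across a wider range of memory). Standalone sketch:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w fill -f 128 -q 64 -a 64 -y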
00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:46.803 [2024-07-15 16:17:32.155909] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:46.803 [2024-07-15 16:17:32.156003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509826 ] 00:06:46.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.803 [2024-07-15 16:17:32.237624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.803 [2024-07-15 16:17:32.318556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 
accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.803 16:17:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.180 16:17:33 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:48.180 16:17:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.180 00:06:48.180 real 0m1.379s 00:06:48.180 user 0m1.241s 00:06:48.180 sys 0m0.151s 00:06:48.180 16:17:33 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.180 16:17:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 ************************************ 00:06:48.180 END TEST accel_fill 00:06:48.180 ************************************ 00:06:48.180 16:17:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.180 16:17:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:48.180 16:17:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.180 16:17:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.180 16:17:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 ************************************ 00:06:48.180 START TEST accel_copy_crc32c 00:06:48.180 ************************************ 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:48.180 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:48.180 [2024-07-15 16:17:33.586491] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:48.180 [2024-07-15 16:17:33.586558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510078 ] 00:06:48.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.180 [2024-07-15 16:17:33.660392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.180 [2024-07-15 16:17:33.744252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:48.440 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
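copy_crc32c fuses the two preceding workloads into one operation per task: copy the source buffer and compute its crc32c (seed 0, per val=0 above) in a single accel submission, which is why the trace just below carries two separate '4096 bytes' buffer values, one for each side of the copy. Standalone sketch:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w copy_crc32c -y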
00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.441 16:17:33 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.441 16:17:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.376 00:06:49.376 real 0m1.354s 00:06:49.376 user 0m1.228s 00:06:49.376 sys 0m0.141s 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.376 16:17:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:49.376 ************************************ 00:06:49.376 END TEST accel_copy_crc32c 00:06:49.376 ************************************ 00:06:49.636 16:17:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.636 16:17:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.636 16:17:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:49.636 16:17:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.636 16:17:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.636 ************************************ 00:06:49.636 START TEST accel_copy_crc32c_C2 00:06:49.636 
************************************ 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:49.636 16:17:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:49.636 [2024-07-15 16:17:35.012216] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:49.636 [2024-07-15 16:17:35.012306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510316 ] 00:06:49.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.636 [2024-07-15 16:17:35.086305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.636 [2024-07-15 16:17:35.170541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.895 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.896 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.896 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.896 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.896 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.896 16:17:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.832 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.832 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.832 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.832 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.832 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.832 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.833 16:17:36 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.833 00:06:50.833 real 0m1.377s 00:06:50.833 user 0m1.245s 00:06:50.833 sys 0m0.145s 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.833 16:17:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:50.833 ************************************ 00:06:50.833 END TEST accel_copy_crc32c_C2 00:06:50.833 ************************************ 00:06:50.833 16:17:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.092 16:17:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:51.092 16:17:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:51.092 16:17:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.092 16:17:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.092 ************************************ 00:06:51.092 START TEST accel_dualcast 00:06:51.092 ************************************ 00:06:51.092 16:17:36 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:51.092 16:17:36 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:51.092 [2024-07-15 16:17:36.469711] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
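The dense accel/accel.sh@19/@21 trace above is bash xtrace output from the harness consuming accel_perf's option stream one colon-separated var/val pair at a time, keeping only the fields it later asserts on (accel_opc, accel_module). A minimal sketch of that loop, reconstructed from the trace alone (the field names "opc" and "module" and the sample input are assumptions, not verbatim accel.sh source):

    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # e.g. copy_crc32c, dualcast, compare, xor
            module) accel_module=$val ;;  # e.g. software
            *) ;;                         # thread count, run time, '4096 bytes' buffers, ...
        esac
    done < <(printf 'opc:copy_crc32c\nmodule:software\n')  # hypothetical stdin, for illustration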
00:06:51.093 [2024-07-15 16:17:36.469798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510541 ] 00:06:51.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.093 [2024-07-15 16:17:36.546830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.093 [2024-07-15 16:17:36.629826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.352 16:17:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.290 16:17:37 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:52.290 16:17:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.290 00:06:52.290 real 0m1.383s 00:06:52.290 user 0m1.235s 00:06:52.290 sys 0m0.161s 00:06:52.290 16:17:37 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.290 16:17:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:52.290 ************************************ 00:06:52.290 END TEST accel_dualcast 00:06:52.290 ************************************ 00:06:52.549 16:17:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.549 16:17:37 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:52.549 16:17:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:52.549 16:17:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.549 16:17:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.549 ************************************ 00:06:52.549 START TEST accel_compare 00:06:52.549 ************************************ 00:06:52.549 16:17:37 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:52.549 16:17:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:52.549 [2024-07-15 16:17:37.929578] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
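Each accel_test run above launches the accel_perf example binary with -t 1 (one second), -w <workload>, -y (verify), and its accel JSON config handed over on an inherited descriptor (-c /dev/fd/62) rather than a temp file on disk. A hedged sketch of that plumbing (the empty config and the herestring-on-fd-62 trick are illustrative assumptions; in the real script build_accel_config assembles the JSON):

    cfg='{}'  # placeholder for whatever build_accel_config emits
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 62<<< "$cfg"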
00:06:52.549 [2024-07-15 16:17:37.929667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510732 ] 00:06:52.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.549 [2024-07-15 16:17:38.002489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.549 [2024-07-15 16:17:38.085342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.809 16:17:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.746 
16:17:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:53.746 16:17:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.746 00:06:53.746 real 0m1.377s 00:06:53.746 user 0m1.244s 00:06:53.746 sys 0m0.144s 00:06:53.746 16:17:39 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.746 16:17:39 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:53.746 ************************************ 00:06:53.746 END TEST accel_compare 00:06:53.746 ************************************ 00:06:54.005 16:17:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.005 16:17:39 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:54.005 16:17:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:54.005 16:17:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.005 16:17:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.005 ************************************ 00:06:54.005 START TEST accel_xor 00:06:54.005 ************************************ 00:06:54.005 16:17:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:54.005 16:17:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:54.005 [2024-07-15 16:17:39.389628] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
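The "************ START/END TEST ************" banners, the '[' 7 -le 1 ']' guard, and the real/user/sys lines above all come from the run_test wrapper in common/autotest_common.sh, which names a test, refuses to run without a command, times it with the shell builtin, and propagates the command's status. A simplified reconstruction under those assumptions (not the verbatim helper):

    run_test() {
        local name=$1; shift
        [ "$#" -le 1 ] && return 1                      # the arg-count guard seen in the trace
        echo "************ START TEST $name ************"
        time "$@"                                       # e.g. accel_test -t 1 -w xor -y
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }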
00:06:54.005 [2024-07-15 16:17:39.389702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510931 ] 00:06:54.005 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.005 [2024-07-15 16:17:39.466957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.005 [2024-07-15 16:17:39.556706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.265 16:17:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:55.202 16:17:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.202 00:06:55.202 real 0m1.390s 00:06:55.202 user 0m1.247s 00:06:55.202 sys 0m0.156s 00:06:55.202 16:17:40 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.202 16:17:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:55.202 ************************************ 00:06:55.202 END TEST accel_xor 00:06:55.202 ************************************ 00:06:55.462 16:17:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.462 16:17:40 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:55.462 16:17:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:55.462 16:17:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.462 16:17:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 ************************************ 00:06:55.462 START TEST accel_xor 00:06:55.462 ************************************ 00:06:55.462 16:17:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:55.462 16:17:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:55.462 [2024-07-15 16:17:40.834953] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
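After each run the harness asserts that a module and an opcode were actually parsed from the stream and that the expected engine executed the workload; the \s\o\f\t\w\a\r\e spelling simply escapes every character so the [[ ... == ... ]] test compares literally rather than as a glob pattern. The three checks as they appear (post-expansion) in the trace above:

    [[ -n $accel_module ]]                   # a module was reported (here: software)
    [[ -n $accel_opc ]]                      # an opcode was reported (here: xor)
    [[ $accel_module == \s\o\f\t\w\a\r\e ]]  # literal match against "software"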
00:06:55.462 [2024-07-15 16:17:40.835011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511130 ] 00:06:55.462 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.462 [2024-07-15 16:17:40.906208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.462 [2024-07-15 16:17:40.985596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.462 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.721 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.721 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.721 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.721 16:17:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.721 16:17:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.721 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.721 16:17:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:56.659 16:17:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.659 00:06:56.659 real 0m1.351s 00:06:56.659 user 0m1.235s 00:06:56.659 sys 0m0.129s 00:06:56.659 16:17:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.659 16:17:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:56.659 ************************************ 00:06:56.659 END TEST accel_xor 00:06:56.659 ************************************ 00:06:56.659 16:17:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.659 16:17:42 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:56.659 16:17:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:56.659 16:17:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.659 16:17:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.919 ************************************ 00:06:56.919 START TEST accel_dif_verify 00:06:56.919 ************************************ 00:06:56.919 16:17:42 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:56.919 [2024-07-15 16:17:42.249845] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
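The dif_verify case starting here drives the same read loop, with two differences visible in the trace that follows: run_test passes no -y flag, and the option stream carries DIF-shaped sizes, two '4096 bytes' buffers plus '512 bytes' and '8 bytes' values (plausibly a block size and per-block metadata size, though the trace leaves them unlabeled). The invocation as traced:

    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify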
00:06:56.919 [2024-07-15 16:17:42.249892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511323 ] 00:06:56.919 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.919 [2024-07-15 16:17:42.314844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.919 [2024-07-15 16:17:42.396380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.919 16:17:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:58.298 16:17:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.298 00:06:58.298 real 0m1.345s 00:06:58.298 user 0m1.227s 00:06:58.298 sys 0m0.133s 00:06:58.298 16:17:43 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.298 16:17:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.298 ************************************ 00:06:58.298 END TEST accel_dif_verify 00:06:58.298 ************************************ 00:06:58.298 16:17:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.298 16:17:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:58.298 16:17:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:58.298 16:17:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.298 16:17:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.298 ************************************ 00:06:58.298 START TEST accel_dif_generate 00:06:58.298 ************************************ 00:06:58.298 16:17:43 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.298 
16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:58.298 16:17:43 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:58.298 [2024-07-15 16:17:43.671389] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:06:58.298 [2024-07-15 16:17:43.671456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511521 ] 00:06:58.298 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.298 [2024-07-15 16:17:43.746215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.298 [2024-07-15 16:17:43.830000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:58.582 16:17:43 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:58.582 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.583 16:17:43 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.583 16:17:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.589 16:17:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:59.589 16:17:45 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.589 00:06:59.589 real 0m1.381s 00:06:59.589 user 0m1.247s 00:06:59.589 sys 0m0.149s 00:06:59.589 16:17:45 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.589 16:17:45 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:59.589 ************************************ 00:06:59.589 END TEST accel_dif_generate 00:06:59.589 ************************************ 00:06:59.589 16:17:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.589 16:17:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:59.589 16:17:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:59.589 16:17:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.589 16:17:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.589 ************************************ 00:06:59.589 START TEST accel_dif_generate_copy 00:06:59.589 ************************************ 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:59.589 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:59.589 [2024-07-15 16:17:45.138405] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
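The DIF passes in this stretch, including the dif_generate_copy pass starting here, all reduce to the accel_perf invocation the trace prints, with only the -w workload name changing. A minimal way to reproduce one by hand, assuming the same build tree (the harness also pipes a generated JSON accel config in on fd 62 via -c /dev/fd/62; dropping that and letting the app default to the software module is an assumption of this sketch):

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# 1-second software dif_generate run, as driven by accel_test above
$SPDK/build/examples/accel_perf -t 1 -w dif_generate
# the pass starting here differs only in the workload name
$SPDK/build/examples/accel_perf -t 1 -w dif_generate_copy

The '4096 bytes', '512 bytes' and '8 bytes' values echoed by the dif_generate setup are the test's buffer and block geometry; 8 bytes is the size of a T10 DIF protection-information field.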
00:06:59.589 [2024-07-15 16:17:45.138493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511723 ] 00:06:59.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.848 [2024-07-15 16:17:45.215819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.848 [2024-07-15 16:17:45.304595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.848 16:17:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.225 00:07:01.225 real 0m1.391s 00:07:01.225 user 0m1.255s 00:07:01.225 sys 0m0.149s 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.225 16:17:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.225 ************************************ 00:07:01.225 END TEST accel_dif_generate_copy 00:07:01.225 ************************************ 00:07:01.225 16:17:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.225 16:17:46 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:01.225 16:17:46 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:01.225 16:17:46 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:01.225 16:17:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.225 16:17:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.225 ************************************ 00:07:01.225 START TEST accel_comp 00:07:01.225 ************************************ 00:07:01.225 16:17:46 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:01.225 16:17:46 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:01.225 16:17:46 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:01.225 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.225 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.225 16:17:46 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:01.225 16:17:46 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:01.226 16:17:46 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:01.226 16:17:46 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.226 16:17:46 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.226 16:17:46 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.226 16:17:46 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.226 16:17:46 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.226 16:17:46 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:01.226 16:17:46 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:01.226 [2024-07-15 16:17:46.611655] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:07:01.226 [2024-07-15 16:17:46.611737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511921 ] 00:07:01.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.226 [2024-07-15 16:17:46.686870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.226 [2024-07-15 16:17:46.769247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.486 16:17:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:02.423 16:17:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.423 00:07:02.423 real 0m1.383s 00:07:02.423 user 0m1.254s 00:07:02.423 sys 0m0.142s 00:07:02.423 16:17:47 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.423 16:17:47 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:02.423 ************************************ 00:07:02.423 END TEST accel_comp 00:07:02.423 ************************************ 00:07:02.682 16:17:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.682 16:17:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:02.682 16:17:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:02.682 16:17:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
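The compress pass that just finished and the decompress pass being launched here are the only ones in this stretch that take an input file: -l points accel_perf at test/accel/bib in the SPDK tree, and the decompress run adds -y, presumably to verify the output against the original. As above, a hand-run sketch minus the fd-62 JSON config:

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# compress the bundled corpus for 1 second
$SPDK/build/examples/accel_perf -t 1 -w compress -l $SPDK/test/accel/bib
# decompress it with result verification (-y)
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y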
00:07:02.682 16:17:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.682 ************************************ 00:07:02.682 START TEST accel_decomp 00:07:02.682 ************************************ 00:07:02.682 16:17:48 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:02.682 16:17:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:02.682 [2024-07-15 16:17:48.075692] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
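Every START/END banner and real/user/sys triplet in this log comes from the run_test helper in common/autotest_common.sh, visible in the trace as run_test <name> accel_test ... followed by xtrace_disable calls. A hypothetical simplification of that wrapper pattern (not SPDK's actual implementation, just the shape this log implies):

run_test() {
    local name=$1; shift
    echo "START TEST $name"     # the starred banners in this log
    time "$@"                   # bash's time builtin emits the real/user/sys lines
    echo "END TEST $name"
    # on success the wrapped test returns 0, which the '-- # return 0' lines reflect
}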
00:07:02.682 [2024-07-15 16:17:48.075785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512114 ] 00:07:02.682 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.682 [2024-07-15 16:17:48.152941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.682 [2024-07-15 16:17:48.239892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.941 16:17:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:03.878 16:17:49 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.878 16:17:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.878 00:07:03.878 real 0m1.390s 00:07:03.878 user 0m1.251s 00:07:03.878 sys 0m0.153s 00:07:03.878 16:17:49 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.878 16:17:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:03.878 ************************************ 00:07:03.878 END TEST accel_decomp 00:07:03.878 ************************************ 00:07:04.138 16:17:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.138 16:17:49 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.138 16:17:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:04.138 16:17:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.138 16:17:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.138 ************************************ 00:07:04.138 START TEST accel_decomp_full 00:07:04.138 ************************************ 00:07:04.138 16:17:49 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.138 16:17:49 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:04.138 [2024-07-15 16:17:49.507636] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:07:04.138 [2024-07-15 16:17:49.507697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512313 ] 00:07:04.138 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.138 [2024-07-15 16:17:49.580355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.138 [2024-07-15 16:17:49.660433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:04.138 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.139 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.139 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.139 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:04.139 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.139 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.139 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.139 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- 
# case "$var" in 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.397 16:17:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.332 16:17:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.332 00:07:05.332 real 0m1.365s 00:07:05.332 user 0m1.246s 00:07:05.332 sys 0m0.132s 00:07:05.332 16:17:50 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.332 16:17:50 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:05.332 ************************************ 00:07:05.332 END TEST accel_decomp_full 00:07:05.332 ************************************ 00:07:05.332 16:17:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.332 16:17:50 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.332 16:17:50 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 
']' 00:07:05.332 16:17:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.332 16:17:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.591 ************************************ 00:07:05.591 START TEST accel_decomp_mcore 00:07:05.591 ************************************ 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:05.591 16:17:50 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:05.591 [2024-07-15 16:17:50.950831] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
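accel_decomp_mcore is the first pass here to spread accel_perf across multiple reactors: -m 0xf selects a 4-core mask, which is why the app reports 'Total cores available: 4' and four 'Reactor started' notices above instead of one. Hand-run sketch, with the same caveat about the omitted fd-62 config:

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# -m 0xf = run one reactor on each of cores 0-3
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf

With four workers the cumulative CPU time should land near 4x the wall clock, and the user 0m4.594s against real 0m1.377s reported at the end of this pass matches that.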
00:07:05.591 [2024-07-15 16:17:50.950891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512517 ] 00:07:05.591 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.591 [2024-07-15 16:17:51.024201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.591 [2024-07-15 16:17:51.110782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.591 [2024-07-15 16:17:51.110871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.591 [2024-07-15 16:17:51.110946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.591 [2024-07-15 16:17:51.110947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.591 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.850 16:17:51 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:07:05.850 16:17:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.785 00:07:06.785 real 0m1.377s 00:07:06.785 user 0m4.594s 00:07:06.785 sys 0m0.152s 00:07:06.785 16:17:52 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.785 16:17:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:06.785 ************************************ 00:07:06.785 END TEST accel_decomp_mcore 00:07:06.785 ************************************ 00:07:06.785 16:17:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.785 16:17:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.785 16:17:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:06.785 16:17:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.785 16:17:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.044 ************************************ 00:07:07.044 START TEST accel_decomp_full_mcore 00:07:07.045 ************************************ 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:07.045 [2024-07-15 16:17:52.413206] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:07.045 [2024-07-15 16:17:52.413268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512760 ] 00:07:07.045 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.045 [2024-07-15 16:17:52.482745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.045 [2024-07-15 16:17:52.570636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.045 [2024-07-15 16:17:52.570651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.045 [2024-07-15 16:17:52.570670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.045 [2024-07-15 16:17:52.570671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.045 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 16:17:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.240 00:07:08.240 real 0m1.395s 00:07:08.240 user 0m4.661s 00:07:08.240 sys 0m0.155s 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.240 16:17:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:08.240 ************************************ 00:07:08.240 END TEST accel_decomp_full_mcore 00:07:08.240 ************************************ 00:07:08.500 16:17:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.500 16:17:53 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.500 16:17:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:08.500 16:17:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.500 16:17:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.500 ************************************ 00:07:08.500 START TEST accel_decomp_mthread 00:07:08.500 ************************************ 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:08.500 16:17:53 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:08.500 [2024-07-15 16:17:53.898037] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:08.500 [2024-07-15 16:17:53.898118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513018 ] 00:07:08.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.500 [2024-07-15 16:17:53.972504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.500 [2024-07-15 16:17:54.055376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.759 16:17:54 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.759 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.760 16:17:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.697 16:17:55 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.697 00:07:09.697 real 0m1.385s 00:07:09.697 user 0m1.255s 00:07:09.697 sys 0m0.144s 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.697 16:17:55 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:09.697 ************************************ 00:07:09.697 END TEST accel_decomp_mthread 00:07:09.697 ************************************ 00:07:09.956 16:17:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.956 16:17:55 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.956 16:17:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:09.956 16:17:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:07:09.956 16:17:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.956 ************************************ 00:07:09.956 START TEST accel_decomp_full_mthread 00:07:09.956 ************************************ 00:07:09.956 16:17:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.956 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:09.956 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:09.957 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:09.957 [2024-07-15 16:17:55.338919] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:09.957 [2024-07-15 16:17:55.338988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513255 ] 00:07:09.957 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.957 [2024-07-15 16:17:55.406645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.957 [2024-07-15 16:17:55.489594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.216 16:17:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.216 16:17:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.153 00:07:11.153 real 0m1.392s 00:07:11.153 user 0m1.268s 00:07:11.153 sys 0m0.137s 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.153 16:17:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:11.153 ************************************ 00:07:11.153 END 
TEST accel_decomp_full_mthread 00:07:11.153 ************************************ 00:07:11.413 16:17:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.413 16:17:56 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:11.413 16:17:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:11.413 16:17:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.413 16:17:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.413 16:17:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.413 16:17:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.413 16:17:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.413 16:17:56 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:11.413 16:17:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.413 16:17:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:11.413 16:17:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.413 16:17:56 accel -- accel/accel.sh@41 -- # jq -r . 00:07:11.413 16:17:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.413 ************************************ 00:07:11.413 START TEST accel_dif_functional_tests 00:07:11.413 ************************************ 00:07:11.413 16:17:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.413 [2024-07-15 16:17:56.803475] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:07:11.413 [2024-07-15 16:17:56.803560] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513462 ] 00:07:11.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.413 [2024-07-15 16:17:56.876911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.413 [2024-07-15 16:17:56.961836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.413 [2024-07-15 16:17:56.961924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.413 [2024-07-15 16:17:56.961926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.672 00:07:11.672 00:07:11.672 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.672 http://cunit.sourceforge.net/ 00:07:11.672 00:07:11.672 00:07:11.672 Suite: accel_dif 00:07:11.672 Test: verify: DIF generated, GUARD check ...passed 00:07:11.672 Test: verify: DIF generated, APPTAG check ...passed 00:07:11.672 Test: verify: DIF generated, REFTAG check ...passed 00:07:11.672 Test: verify: DIF not generated, GUARD check ...[2024-07-15 16:17:57.034972] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:11.672 passed 00:07:11.672 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 16:17:57.035029] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:11.672 passed 00:07:11.672 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 16:17:57.035070] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:11.672 passed 00:07:11.672 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:11.672 Test: verify: APPTAG incorrect, APPTAG check 
...[2024-07-15 16:17:57.035119] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:11.672 passed 00:07:11.672 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:11.672 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:11.672 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:11.672 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 16:17:57.035218] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:11.672 passed 00:07:11.672 Test: verify copy: DIF generated, GUARD check ...passed 00:07:11.672 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:11.672 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:11.672 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 16:17:57.035330] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:11.672 passed 00:07:11.672 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 16:17:57.035355] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:11.672 passed 00:07:11.673 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 16:17:57.035381] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:11.673 passed 00:07:11.673 Test: generate copy: DIF generated, GUARD check ...passed 00:07:11.673 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:11.673 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:11.673 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:11.673 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:11.673 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:11.673 Test: generate copy: iovecs-len validate ...[2024-07-15 16:17:57.035557] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:11.673 passed 00:07:11.673 Test: generate copy: buffer alignment validate ...passed 00:07:11.673 00:07:11.673 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.673 suites 1 1 n/a 0 0 00:07:11.673 tests 26 26 26 0 0 00:07:11.673 asserts 115 115 115 0 n/a 00:07:11.673 00:07:11.673 Elapsed time = 0.002 seconds 00:07:11.673 00:07:11.673 real 0m0.423s 00:07:11.673 user 0m0.578s 00:07:11.673 sys 0m0.167s 00:07:11.673 16:17:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.673 16:17:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:11.673 ************************************ 00:07:11.673 END TEST accel_dif_functional_tests 00:07:11.673 ************************************ 00:07:11.673 16:17:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.673 00:07:11.673 real 0m31.978s 00:07:11.673 user 0m34.985s 00:07:11.673 sys 0m5.135s 00:07:11.673 16:17:57 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.673 16:17:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.673 ************************************ 00:07:11.673 END TEST accel 00:07:11.673 ************************************ 00:07:11.932 16:17:57 -- common/autotest_common.sh@1142 -- # return 0 00:07:11.932 16:17:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:11.932 16:17:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.932 16:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.932 16:17:57 -- common/autotest_common.sh@10 -- # set +x 00:07:11.932 ************************************ 00:07:11.932 START TEST accel_rpc 00:07:11.932 ************************************ 00:07:11.932 16:17:57 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:11.932 * Looking for test storage... 00:07:11.932 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:11.932 16:17:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.932 16:17:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1513528 00:07:11.932 16:17:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1513528 00:07:11.932 16:17:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:11.932 16:17:57 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1513528 ']' 00:07:11.932 16:17:57 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.932 16:17:57 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.932 16:17:57 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.932 16:17:57 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.932 16:17:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.932 [2024-07-15 16:17:57.439480] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:11.932 [2024-07-15 16:17:57.439560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513528 ] 00:07:11.932 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.192 [2024-07-15 16:17:57.514139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.192 [2024-07-15 16:17:57.595895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.760 16:17:58 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.760 16:17:58 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:12.760 16:17:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:12.760 16:17:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:12.760 16:17:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:12.760 16:17:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:12.760 16:17:58 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:12.760 16:17:58 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.760 16:17:58 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.760 16:17:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.760 ************************************ 00:07:12.760 START TEST accel_assign_opcode 00:07:12.760 ************************************ 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:12.760 [2024-07-15 16:17:58.306044] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:12.760 [2024-07-15 16:17:58.318065] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.760 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:13.019 software
00:07:13.019
00:07:13.019 real 0m0.256s
00:07:13.019 user 0m0.042s
00:07:13.019 sys 0m0.019s
00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:13.019 16:17:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:13.019 ************************************
00:07:13.019 END TEST accel_assign_opcode
00:07:13.019 ************************************
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:07:13.278 16:17:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1513528
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1513528 ']'
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1513528
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@953 -- # uname
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1513528
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1513528'
00:07:13.278 killing process with pid 1513528
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@967 -- # kill 1513528
00:07:13.278 16:17:58 accel_rpc -- common/autotest_common.sh@972 -- # wait 1513528
00:07:13.538
00:07:13.538 real 0m1.672s
00:07:13.538 user 0m1.702s
00:07:13.538 sys 0m0.496s
00:07:13.538 16:17:58 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:13.538 16:17:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:13.538 ************************************
00:07:13.538 END TEST accel_rpc
00:07:13.538 ************************************
00:07:13.538 16:17:59 -- common/autotest_common.sh@1142 -- # return 0
00:07:13.538 16:17:59 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:07:13.538 16:17:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:13.538 16:17:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:13.538 16:17:59 -- common/autotest_common.sh@10 -- # set +x
00:07:13.538 ************************************
00:07:13.538 START TEST app_cmdline
00:07:13.538 ************************************
00:07:13.538 16:17:59 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:07:13.798 * Looking for test storage...
00:07:13.798 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:13.798 16:17:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:13.798 16:17:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1513843
00:07:13.798 16:17:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1513843
00:07:13.798 16:17:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:13.798 16:17:59 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1513843 ']'
00:07:13.798 16:17:59 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.798 16:17:59 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:13.798 16:17:59 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:13.798 16:17:59 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:13.798 16:17:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:13.798 [2024-07-15 16:17:59.206715] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization...
00:07:13.798 [2024-07-15 16:17:59.206791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513843 ]
00:07:13.798 EAL: No free 2048 kB hugepages reported on node 1
00:07:13.798 [2024-07-15 16:17:59.283288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.875 [2024-07-15 16:17:59.365467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.736 16:18:00 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:14.736 16:18:00 app_cmdline -- common/autotest_common.sh@862 -- # return 0
00:07:14.736 16:18:00 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:14.736 {
00:07:14.736 "version": "SPDK v24.09-pre git sha1 24034319f",
00:07:14.736 "fields": {
00:07:14.736 "major": 24,
00:07:14.736 "minor": 9,
00:07:14.736 "patch": 0,
00:07:14.736 "suffix": "-pre",
00:07:14.736 "commit": "24034319f"
00:07:14.736 }
00:07:14.736 }
00:07:14.736 16:18:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:14.736 16:18:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:14.736 16:18:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:14.736 16:18:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:14.736 16:18:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:14.736 16:18:00 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:14.736 16:18:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:14.737 16:18:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:14.737 16:18:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:14.737 16:18:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@648 -- # local es=0
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]]
00:07:14.737 16:18:00 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:14.996 request:
00:07:14.996 {
00:07:14.996 "method": "env_dpdk_get_mem_stats",
00:07:14.996 "req_id": 1
00:07:14.996 }
00:07:14.996 Got JSON-RPC error response
00:07:14.996 response:
00:07:14.996 {
00:07:14.996 "code": -32601,
00:07:14.996 "message": "Method not found"
00:07:14.996 }
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:14.996 16:18:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1513843
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1513843 ']'
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1513843
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@953 -- # uname
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1513843
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1513843'
00:07:14.996 killing process with pid 1513843
00:07:14.996 16:18:00 app_cmdline -- common/autotest_common.sh@967 -- # kill 1513843
00:07:15.255 16:18:00 app_cmdline -- common/autotest_common.sh@972 -- # wait 1513843
00:07:15.255
00:07:15.255 real 0m1.734s
00:07:15.255 user 0m2.010s
00:07:15.255 sys 0m0.517s
00:07:15.255 16:18:00 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:15.255 16:18:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:15.255 ************************************
00:07:15.255 END TEST app_cmdline
00:07:15.255 ************************************
00:07:15.514 16:18:00 -- common/autotest_common.sh@1142 -- # return 0
00:07:15.514 16:18:00 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:07:15.514 16:18:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:15.514 16:18:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:15.514 16:18:00 -- common/autotest_common.sh@10 -- # set +x
00:07:15.514 ************************************
00:07:15.514 START TEST version
00:07:15.514 ************************************
00:07:15.514 16:18:00 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:07:15.514 * Looking for test storage...
00:07:15.514 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:15.514 16:18:00 version -- app/version.sh@17 -- # get_header_version major
00:07:15.514 16:18:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:15.514 16:18:00 version -- app/version.sh@14 -- # cut -f2
00:07:15.514 16:18:00 version -- app/version.sh@14 -- # tr -d '"'
00:07:15.515 16:18:00 version -- app/version.sh@17 -- # major=24
00:07:15.515 16:18:00 version -- app/version.sh@18 -- # get_header_version minor
00:07:15.515 16:18:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:15.515 16:18:00 version -- app/version.sh@14 -- # cut -f2
00:07:15.515 16:18:00 version -- app/version.sh@14 -- # tr -d '"'
00:07:15.515 16:18:00 version -- app/version.sh@18 -- # minor=9
00:07:15.515 16:18:00 version -- app/version.sh@19 -- # get_header_version patch
00:07:15.515 16:18:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:15.515 16:18:00 version -- app/version.sh@14 -- # cut -f2
00:07:15.515 16:18:00 version -- app/version.sh@14 -- # tr -d '"'
00:07:15.515 16:18:00 version -- app/version.sh@19 -- # patch=0
00:07:15.515 16:18:00 version -- app/version.sh@20 -- # get_header_version suffix
00:07:15.515 16:18:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:15.515 16:18:00 version -- app/version.sh@14 -- # cut -f2
00:07:15.515 16:18:00 version -- app/version.sh@14 -- # tr -d '"'
00:07:15.515 16:18:01 version -- app/version.sh@20 -- # suffix=-pre
00:07:15.515 16:18:01 version -- app/version.sh@22 -- # version=24.9
00:07:15.515 16:18:01 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:15.515 16:18:01 version -- app/version.sh@28 -- # version=24.9rc0
00:07:15.515 16:18:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python
00:07:15.515 16:18:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:15.515 16:18:01 version -- app/version.sh@30 -- # py_version=24.9rc0
00:07:15.515 16:18:01 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:07:15.515
00:07:15.515 real 0m0.168s
00:07:15.515 user 0m0.081s
00:07:15.515 sys 0m0.125s
00:07:15.515 16:18:01 version -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:15.515 16:18:01 version -- common/autotest_common.sh@10 -- # set +x
00:07:15.515 ************************************
00:07:15.515 END TEST version
00:07:15.515 ************************************
00:07:15.515 16:18:01 -- common/autotest_common.sh@1142 -- # return 0
00:07:15.515 16:18:01 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']'
00:07:15.515 16:18:01 -- spdk/autotest.sh@198 -- # uname -s
00:07:15.515 16:18:01 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]]
00:07:15.515 16:18:01 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]]
00:07:15.515 16:18:01 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]]
00:07:15.515 16:18:01 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']'
00:07:15.515 16:18:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:07:15.515 16:18:01 -- spdk/autotest.sh@260 -- # timing_exit lib
00:07:15.515 16:18:01 -- common/autotest_common.sh@728 -- # xtrace_disable
00:07:15.515 16:18:01 -- common/autotest_common.sh@10 -- # set +x
00:07:15.774 16:18:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:07:15.774 16:18:01 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:07:15.774 16:18:01 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:07:15.774 16:18:01 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]]
00:07:15.774 16:18:01 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh
00:07:15.774 16:18:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:15.774 16:18:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:15.774 16:18:01 -- common/autotest_common.sh@10 -- # set +x
00:07:15.774 ************************************
00:07:15.774 START TEST llvm_fuzz
00:07:15.774 ************************************
00:07:15.774 16:18:01 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh
00:07:15.774 * Looking for test storage...
00:07:15.774 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets))
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets
00:07:15.774 16:18:01 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=()
00:07:15.774 16:18:01 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers
00:07:15.774 16:18:01 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]]
00:07:15.774 16:18:01 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*)
00:07:15.774 16:18:01 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}")
00:07:15.774 16:18:01 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio'
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]]
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:07:15.774 16:18:01 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:07:15.775 16:18:01 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:07:15.775 16:18:01 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh
00:07:15.775 16:18:01 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:15.775 16:18:01 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:15.775 16:18:01 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:15.775 ************************************
00:07:15.775 START TEST nvmf_llvm_fuzz
00:07:15.775 ************************************
00:07:15.775 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh
00:07:16.036 * Looking for test storage...
00:07:16.036 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']'
00:07:16.036 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]]
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX=
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]]
00:07:16.037 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:07:16.037 #define SPDK_CONFIG_H
00:07:16.037 #define SPDK_CONFIG_APPS 1
00:07:16.037 #define SPDK_CONFIG_ARCH native
00:07:16.037 #undef SPDK_CONFIG_ASAN
00:07:16.037 #undef SPDK_CONFIG_AVAHI
00:07:16.037 #undef SPDK_CONFIG_CET
00:07:16.037 #define SPDK_CONFIG_COVERAGE 1
00:07:16.037 #define SPDK_CONFIG_CROSS_PREFIX
00:07:16.037 #undef SPDK_CONFIG_CRYPTO
00:07:16.037 #undef SPDK_CONFIG_CRYPTO_MLX5
00:07:16.037 #undef SPDK_CONFIG_CUSTOMOCF
00:07:16.037 #undef SPDK_CONFIG_DAOS
00:07:16.037 #define SPDK_CONFIG_DAOS_DIR
00:07:16.037 #define SPDK_CONFIG_DEBUG 1
00:07:16.037 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:07:16.037 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:07:16.038 #define SPDK_CONFIG_DPDK_INC_DIR
00:07:16.038 #define SPDK_CONFIG_DPDK_LIB_DIR
00:07:16.038 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:07:16.038 #undef SPDK_CONFIG_DPDK_UADK
00:07:16.038 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:07:16.038 #define SPDK_CONFIG_EXAMPLES 1
00:07:16.038 #undef SPDK_CONFIG_FC
00:07:16.038 #define SPDK_CONFIG_FC_PATH
00:07:16.038 #define SPDK_CONFIG_FIO_PLUGIN 1
00:07:16.038 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:07:16.038 #undef SPDK_CONFIG_FUSE
00:07:16.038 #define SPDK_CONFIG_FUZZER 1
00:07:16.038 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a
00:07:16.038 #undef SPDK_CONFIG_GOLANG
00:07:16.038 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:07:16.038 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:07:16.038 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:07:16.038 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:07:16.038 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:07:16.038 #undef SPDK_CONFIG_HAVE_LIBBSD
00:07:16.038 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:07:16.038 #define SPDK_CONFIG_IDXD 1
00:07:16.038 #define SPDK_CONFIG_IDXD_KERNEL 1
00:07:16.038 #undef SPDK_CONFIG_IPSEC_MB
00:07:16.038 #define SPDK_CONFIG_IPSEC_MB_DIR
00:07:16.038 #define SPDK_CONFIG_ISAL 1
00:07:16.038 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:07:16.038 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:07:16.038 #define SPDK_CONFIG_LIBDIR
00:07:16.038 #undef SPDK_CONFIG_LTO
00:07:16.038 #define SPDK_CONFIG_MAX_LCORES 128
00:07:16.038 #define SPDK_CONFIG_NVME_CUSE 1
00:07:16.038 #undef SPDK_CONFIG_OCF
00:07:16.038 #define SPDK_CONFIG_OCF_PATH
00:07:16.038 #define SPDK_CONFIG_OPENSSL_PATH
00:07:16.038 #undef SPDK_CONFIG_PGO_CAPTURE
00:07:16.038 #define SPDK_CONFIG_PGO_DIR
00:07:16.038 #undef SPDK_CONFIG_PGO_USE
00:07:16.038 #define SPDK_CONFIG_PREFIX /usr/local
00:07:16.038 #undef SPDK_CONFIG_RAID5F
00:07:16.038 #undef SPDK_CONFIG_RBD
00:07:16.038 #define SPDK_CONFIG_RDMA 1
00:07:16.038 #define SPDK_CONFIG_RDMA_PROV verbs
00:07:16.038 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:07:16.038 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:07:16.038 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:07:16.038 #undef SPDK_CONFIG_SHARED
00:07:16.038 #undef SPDK_CONFIG_SMA
00:07:16.038 #define SPDK_CONFIG_TESTS 1
00:07:16.038 #undef SPDK_CONFIG_TSAN
00:07:16.038 #define SPDK_CONFIG_UBLK 1
00:07:16.038 #define SPDK_CONFIG_UBSAN 1
00:07:16.038 #undef SPDK_CONFIG_UNIT_TESTS
00:07:16.038 #undef SPDK_CONFIG_URING
00:07:16.038 #define SPDK_CONFIG_URING_PATH
00:07:16.038 #undef SPDK_CONFIG_URING_ZNS
00:07:16.038 #undef SPDK_CONFIG_USDT
00:07:16.038 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:07:16.038 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:07:16.038 #define SPDK_CONFIG_VFIO_USER 1
00:07:16.038 #define SPDK_CONFIG_VFIO_USER_DIR
00:07:16.038 #define SPDK_CONFIG_VHOST 1
00:07:16.038 #define SPDK_CONFIG_VIRTIO 1
00:07:16.038 #undef SPDK_CONFIG_VTUNE
00:07:16.038 #define SPDK_CONFIG_VTUNE_DIR
00:07:16.038 #define SPDK_CONFIG_WERROR 1
00:07:16.038 #define SPDK_CONFIG_WPDK_DIR
00:07:16.038 #undef SPDK_CONFIG_XNVME
00:07:16.038 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]=
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E'
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]]
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]]
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:07:16.038 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # :
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # :
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # :
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # :
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # :
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:07:16.039 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']'
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']'
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind=
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind=
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']'
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]]
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]]
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=()
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE=
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 1514386 ]]
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 1514386
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]]
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Ofu20Y
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]]
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]]
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.Ofu20Y/tests/nvmf /tmp/spdk.Ofu20Y
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864
00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- #
uses["$mount"]=0 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=86983942144 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508576768 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7524634624 00:07:16.040 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895826944 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5890048 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253729280 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254290432 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=561152 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:16.041 16:18:01 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:16.041 * Looking for test storage... 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=86983942144 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9739227136 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.041 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0
00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400
00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0
00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400'
00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:16.041 16:18:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0
00:07:16.300 [2024-07-15 16:18:01.625882] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization...
00:07:16.300 [2024-07-15 16:18:01.625965] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514438 ]
00:07:16.300 EAL: No free 2048 kB hugepages reported on node 1
00:07:16.300 [2024-07-15 16:18:01.828196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.559 [2024-07-15 16:18:01.904910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.559 [2024-07-15 16:18:01.965039] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:16.559 [2024-07-15 16:18:01.981272] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 ***
00:07:16.559 INFO: Running with entropic power schedule (0xFF, 100).
00:07:16.559 INFO: Seed: 3623511438
00:07:16.559 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6),
00:07:16.559 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688),
00:07:16.559 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0
00:07:16.559 INFO: A corpus is not provided, starting from an empty corpus
00:07:16.559 #2 INITED exec/s: 0 rss: 65Mb
00:07:16.559 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
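
The invocation above starts llvm_nvme_fuzz with -Z 0, the first of the 25 fuzzer entry points counted earlier via grep -c '\.fn =', and points it at the NVMe/TCP listener on 127.0.0.1:4400. For orientation on the coverage lines that follow, here is a minimal, self-contained sketch of the harness shape libFuzzer is driving; nvme_cmd_stub and submit_admin_cmd() are hypothetical stand-ins, not SPDK's spdk_nvme_cmd or the real transport plumbing in test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c (whose fuzz_admin_command and TestOneInput the NEW_FUNC lines below name):

/* Illustrative sketch only -- NOT the SPDK source.
 * Build: clang -g -fsanitize=fuzzer sketch.c -o sketch */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct nvme_cmd_stub {
    uint8_t  opc;       /* admin opcode, e.g. 0x02 GET LOG PAGE */
    uint8_t  rsvd[3];
    uint32_t nsid;
    uint8_t  rest[56];  /* cdw2..cdw15 left opaque; 64 bytes total */
};

/* Placeholder for the real submission path over the -F trid
 * connection; here it only prints what would be sent. */
static void submit_admin_cmd(const struct nvme_cmd_stub *cmd)
{
    fprintf(stderr, "admin opc=0x%02x nsid=0x%08x\n", cmd->opc, cmd->nsid);
}

/* Standard libFuzzer entry point: treat each mutated buffer as one
 * admin command and submit it; the target must fail garbage cleanly
 * (INVALID FIELD / INVALID OPCODE completions, as logged below)
 * rather than crash or leak. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    struct nvme_cmd_stub cmd;

    if (size < sizeof(cmd)) {
        return 0;   /* not enough bytes to form a command */
    }
    memcpy(&cmd, data, sizeof(cmd));
    submit_admin_cmd(&cmd);
    return 0;
}

Each '#N NEW cov:' line below is libFuzzer reporting an input that grew coverage: cov/ft are edge and feature counts, corp is corpus units/total bytes, lim is the current input-size cap, L gives the new unit's length and the corpus maximum, and MS names the mutation sequence (InsertRepeatedBytes, EraseBytes, ChangeBinInt, ...) that produced it; CMP- and PersAutoDict- entries additionally quote the DE: dictionary bytes learned from comparison instrumentation.
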
00:07:16.559 This may also happen if the target rejected all inputs we tried so far 00:07:16.559 [2024-07-15 16:18:02.051842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:16.559 [2024-07-15 16:18:02.051886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.818 NEW_FUNC[1/697]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:16.818 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:16.818 #38 NEW cov: 11865 ft: 11859 corp: 2/125b lim: 320 exec/s: 0 rss: 72Mb L: 124/124 MS: 1 InsertRepeatedBytes- 00:07:17.077 [2024-07-15 16:18:02.402467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.077 [2024-07-15 16:18:02.402517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.077 #39 NEW cov: 11995 ft: 12507 corp: 3/249b lim: 320 exec/s: 0 rss: 72Mb L: 124/124 MS: 1 ChangeByte- 00:07:17.077 [2024-07-15 16:18:02.462837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.077 [2024-07-15 16:18:02.462867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.077 #40 NEW cov: 12001 ft: 12622 corp: 4/317b lim: 320 exec/s: 0 rss: 72Mb L: 68/124 MS: 1 EraseBytes- 00:07:17.077 [2024-07-15 16:18:02.523261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.077 [2024-07-15 16:18:02.523286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.077 #41 NEW cov: 12086 ft: 12926 corp: 5/385b lim: 320 exec/s: 0 rss: 72Mb L: 68/124 MS: 1 ChangeBinInt- 00:07:17.077 [2024-07-15 16:18:02.583723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.077 [2024-07-15 16:18:02.583750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.077 [2024-07-15 16:18:02.583838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6c) qid:0 cid:5 nsid:6c6c6c6c cdw10:6c6c6c6c cdw11:4b6c6c6c 00:07:17.077 [2024-07-15 16:18:02.583855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.077 #47 NEW cov: 12109 ft: 13211 corp: 6/516b lim: 320 exec/s: 0 rss: 72Mb L: 131/131 MS: 1 InsertRepeatedBytes- 00:07:17.077 [2024-07-15 16:18:02.633830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.077 [2024-07-15 16:18:02.633859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.077 #48 NEW cov: 12109 ft: 13282 corp: 7/641b 
lim: 320 exec/s: 0 rss: 72Mb L: 125/131 MS: 1 InsertByte- 00:07:17.337 [2024-07-15 16:18:02.684242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.337 [2024-07-15 16:18:02.684269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.337 #49 NEW cov: 12109 ft: 13331 corp: 8/748b lim: 320 exec/s: 0 rss: 72Mb L: 107/131 MS: 1 EraseBytes- 00:07:17.337 [2024-07-15 16:18:02.744817] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.337 [2024-07-15 16:18:02.744844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.337 #50 NEW cov: 12109 ft: 13383 corp: 9/846b lim: 320 exec/s: 0 rss: 72Mb L: 98/131 MS: 1 EraseBytes- 00:07:17.337 [2024-07-15 16:18:02.795060] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.337 [2024-07-15 16:18:02.795088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.337 #51 NEW cov: 12109 ft: 13443 corp: 10/953b lim: 320 exec/s: 0 rss: 72Mb L: 107/131 MS: 1 ChangeByte- 00:07:17.337 [2024-07-15 16:18:02.855484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.337 [2024-07-15 16:18:02.855511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.337 #52 NEW cov: 12109 ft: 13504 corp: 11/1077b lim: 320 exec/s: 0 rss: 72Mb L: 124/131 MS: 1 ShuffleBytes- 00:07:17.337 [2024-07-15 16:18:02.905943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.337 [2024-07-15 16:18:02.905971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.595 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:17.595 #58 NEW cov: 12132 ft: 13553 corp: 12/1201b lim: 320 exec/s: 0 rss: 72Mb L: 124/131 MS: 1 ChangeBinInt- 00:07:17.595 [2024-07-15 16:18:02.956523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0xb4b4b54b4b4b4b4b 00:07:17.595 [2024-07-15 16:18:02.956553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.595 #59 NEW cov: 12132 ft: 13599 corp: 13/1308b lim: 320 exec/s: 0 rss: 72Mb L: 107/131 MS: 1 ChangeBinInt- 00:07:17.595 [2024-07-15 16:18:03.006751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:d56c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0xb4b4b54b4b4b4b4b 00:07:17.595 [2024-07-15 16:18:03.006778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.595 #65 NEW cov: 12132 ft: 13618 corp: 14/1415b lim: 320 exec/s: 65 rss: 72Mb L: 107/131 MS: 1 ChangeByte- 00:07:17.595 [2024-07-15 
16:18:03.067348] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.595 [2024-07-15 16:18:03.067375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.595 [2024-07-15 16:18:03.067476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:5 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:07:17.595 [2024-07-15 16:18:03.067498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.595 [2024-07-15 16:18:03.067592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:6 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:6c6cc0c0 00:07:17.595 [2024-07-15 16:18:03.067609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.595 #66 NEW cov: 12132 ft: 13812 corp: 15/1646b lim: 320 exec/s: 66 rss: 72Mb L: 231/231 MS: 1 InsertRepeatedBytes- 00:07:17.595 [2024-07-15 16:18:03.117092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:d56c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0xb4b4b54b4b4b4b4b 00:07:17.595 [2024-07-15 16:18:03.117120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.595 #67 NEW cov: 12132 ft: 13859 corp: 16/1753b lim: 320 exec/s: 67 rss: 73Mb L: 107/231 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:17.854 [2024-07-15 16:18:03.187419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.854 [2024-07-15 16:18:03.187449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.854 #68 NEW cov: 12132 ft: 13870 corp: 17/1860b lim: 320 exec/s: 68 rss: 73Mb L: 107/231 MS: 1 ChangeBit- 00:07:17.854 [2024-07-15 16:18:03.237518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.854 [2024-07-15 16:18:03.237553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.854 #69 NEW cov: 12132 ft: 13886 corp: 18/1967b lim: 320 exec/s: 69 rss: 73Mb L: 107/231 MS: 1 ChangeByte- 00:07:17.854 [2024-07-15 16:18:03.307819] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.854 [2024-07-15 16:18:03.307850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.854 #70 NEW cov: 12132 ft: 13933 corp: 19/2036b lim: 320 exec/s: 70 rss: 73Mb L: 69/231 MS: 1 EraseBytes- 00:07:17.854 [2024-07-15 16:18:03.358051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.854 [2024-07-15 16:18:03.358083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.854 #71 NEW cov: 12132 ft: 14013 corp: 20/2160b lim: 320 exec/s: 71 rss: 73Mb L: 124/231 MS: 1 
PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:17.854 [2024-07-15 16:18:03.429001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:17.854 [2024-07-15 16:18:03.429030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.854 [2024-07-15 16:18:03.429114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:5 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:07:17.854 [2024-07-15 16:18:03.429129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.854 [2024-07-15 16:18:03.429222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:6 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:6c6cc0c0 00:07:17.854 [2024-07-15 16:18:03.429239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.113 #72 NEW cov: 12132 ft: 14055 corp: 21/2391b lim: 320 exec/s: 72 rss: 73Mb L: 231/231 MS: 1 ChangeByte- 00:07:18.113 [2024-07-15 16:18:03.488436] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:18.113 [2024-07-15 16:18:03.488464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.113 #73 NEW cov: 12132 ft: 14074 corp: 22/2490b lim: 320 exec/s: 73 rss: 73Mb L: 99/231 MS: 1 InsertByte- 00:07:18.113 [2024-07-15 16:18:03.549391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:18.113 [2024-07-15 16:18:03.549417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.113 [2024-07-15 16:18:03.549510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:5 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:07:18.113 [2024-07-15 16:18:03.549525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.113 [2024-07-15 16:18:03.549622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:6 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:6c6cc0c0 00:07:18.113 [2024-07-15 16:18:03.549638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.113 #74 NEW cov: 12132 ft: 14081 corp: 23/2721b lim: 320 exec/s: 74 rss: 73Mb L: 231/231 MS: 1 ChangeBinInt- 00:07:18.113 [2024-07-15 16:18:03.609693] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:18.113 [2024-07-15 16:18:03.609719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.113 [2024-07-15 16:18:03.609811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:5 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:07:18.113 [2024-07-15 16:18:03.609829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:18.113 [2024-07-15 16:18:03.609920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:6 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:6cc0c0c0 00:07:18.114 [2024-07-15 16:18:03.609935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.114 #75 NEW cov: 12132 ft: 14113 corp: 24/2953b lim: 320 exec/s: 75 rss: 73Mb L: 232/232 MS: 1 InsertByte- 00:07:18.114 [2024-07-15 16:18:03.659350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b00000000 00:07:18.114 [2024-07-15 16:18:03.659377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.373 #76 NEW cov: 12132 ft: 14196 corp: 25/3052b lim: 320 exec/s: 76 rss: 73Mb L: 99/232 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:18.373 [2024-07-15 16:18:03.719839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (fa) qid:0 cid:4 nsid:fafafafa cdw10:fafafafa cdw11:fafafafa SGL TRANSPORT DATA BLOCK TRANSPORT 0xfafafafafafafafa 00:07:18.373 [2024-07-15 16:18:03.719865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.373 #78 NEW cov: 12132 ft: 14266 corp: 26/3117b lim: 320 exec/s: 78 rss: 73Mb L: 65/232 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:18.373 [2024-07-15 16:18:03.770067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:18.373 [2024-07-15 16:18:03.770094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.373 #79 NEW cov: 12132 ft: 14270 corp: 27/3224b lim: 320 exec/s: 79 rss: 73Mb L: 107/232 MS: 1 ChangeBinInt- 00:07:18.373 [2024-07-15 16:18:03.830671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:07:18.373 [2024-07-15 16:18:03.830701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.373 #80 NEW cov: 12132 ft: 14279 corp: 28/3348b lim: 320 exec/s: 80 rss: 73Mb L: 124/232 MS: 1 ChangeBinInt- 00:07:18.373 [2024-07-15 16:18:03.891552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x3c29f8a626ff4b4b 00:07:18.373 [2024-07-15 16:18:03.891579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.373 [2024-07-15 16:18:03.891681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:5 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:07:18.373 [2024-07-15 16:18:03.891697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.373 [2024-07-15 16:18:03.891788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c0) qid:0 cid:6 nsid:c0c0c0c0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:07:18.373 [2024-07-15 16:18:03.891804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.373 #81 NEW cov: 12132 ft: 
14301 corp: 29/3588b lim: 320 exec/s: 81 rss: 73Mb L: 240/240 MS: 1 CMP- DE: "\377&\246\370) buf size (4096) 00:07:19.151 [2024-07-15 16:18:04.640414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29490000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.151 [2024-07-15 16:18:04.640443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.410 NEW_FUNC[1/698]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:19.410 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:19.410 #14 NEW cov: 11965 ft: 11955 corp: 2/10b lim: 30 exec/s: 0 rss: 71Mb L: 9/9 MS: 2 ChangeByte-CMP- DE: "I\000\000\000\000\000\000\000"- 00:07:19.669 [2024-07-15 16:18:04.991181] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (42280) > buf size (4096) 00:07:19.669 [2024-07-15 16:18:04.991439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29490000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:04.991482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.669 #15 NEW cov: 12095 ft: 12487 corp: 3/19b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 PersAutoDict- DE: "I\000\000\000\000\000\000\000"- 00:07:19.669 [2024-07-15 16:18:05.051133] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:07:19.669 [2024-07-15 16:18:05.051356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29490000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.051383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.669 #16 NEW cov: 12107 ft: 12780 corp: 4/28b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:19.669 [2024-07-15 16:18:05.101260] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (785412) > buf size (4096) 00:07:19.669 [2024-07-15 16:18:05.101476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000227 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.101501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.669 #20 NEW cov: 12192 ft: 13109 corp: 5/38b lim: 30 exec/s: 0 rss: 72Mb L: 10/10 MS: 4 CopyPart-ChangeByte-ChangeByte-CMP- DE: "\000'\246\370\320\225\205\324"- 00:07:19.669 [2024-07-15 16:18:05.141403] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:19.669 [2024-07-15 16:18:05.141625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.141650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.669 #21 NEW cov: 12192 ft: 13271 corp: 6/47b lim: 30 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 PersAutoDict- DE: "I\000\000\000\000\000\000\000"- 00:07:19.669 
[2024-07-15 16:18:05.181626] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:19.669 [2024-07-15 16:18:05.181764] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:19.669 [2024-07-15 16:18:05.181875] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:19.669 [2024-07-15 16:18:05.182090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.182116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.669 [2024-07-15 16:18:05.182176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.182192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.669 [2024-07-15 16:18:05.182252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.182267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.669 #27 NEW cov: 12192 ft: 13775 corp: 7/70b lim: 30 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:07:19.669 [2024-07-15 16:18:05.231749] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:19.669 [2024-07-15 16:18:05.231886] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002722 00:07:19.669 [2024-07-15 16:18:05.231996] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:19.669 [2024-07-15 16:18:05.232229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.232255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.669 [2024-07-15 16:18:05.232314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.232329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.669 [2024-07-15 16:18:05.232385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.669 [2024-07-15 16:18:05.232399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.929 #28 NEW cov: 12192 ft: 13825 corp: 8/93b lim: 30 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ChangeBinInt- 00:07:19.929 [2024-07-15 16:18:05.281873] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (785412) > buf size (4096) 00:07:19.929 [2024-07-15 16:18:05.281990] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xd095 00:07:19.929 [2024-07-15 16:18:05.282119] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x10000d40a 00:07:19.929 [2024-07-15 16:18:05.282337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000227 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.282362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.929 [2024-07-15 16:18:05.282419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:002700a6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.282434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.929 [2024-07-15 16:18:05.282491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:85d48195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.282505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.929 #29 NEW cov: 12192 ft: 13850 corp: 9/111b lim: 30 exec/s: 0 rss: 72Mb L: 18/23 MS: 1 PersAutoDict- DE: "\000'\246\370\320\225\205\324"- 00:07:19.929 [2024-07-15 16:18:05.331941] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (785412) > buf size (4096) 00:07:19.929 [2024-07-15 16:18:05.332155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000227 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.332180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.929 #30 NEW cov: 12192 ft: 13899 corp: 10/119b lim: 30 exec/s: 0 rss: 72Mb L: 8/23 MS: 1 EraseBytes- 00:07:19.929 [2024-07-15 16:18:05.372139] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:19.929 [2024-07-15 16:18:05.372260] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:19.929 [2024-07-15 16:18:05.372376] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (40096) > buf size (4096) 00:07:19.929 [2024-07-15 16:18:05.372608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.372636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.929 [2024-07-15 16:18:05.372697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.372713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.929 [2024-07-15 16:18:05.372771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27270017 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.372786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.929 #31 NEW cov: 12192 ft: 14002 corp: 11/142b lim: 30 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ChangeBinInt- 00:07:19.929 [2024-07-15 
16:18:05.412205] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (304424) > buf size (4096) 00:07:19.929 [2024-07-15 16:18:05.412438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29498100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.412462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.929 #32 NEW cov: 12192 ft: 14079 corp: 12/153b lim: 30 exec/s: 0 rss: 72Mb L: 11/23 MS: 1 CopyPart- 00:07:19.929 [2024-07-15 16:18:05.452289] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (785412) > buf size (4096) 00:07:19.929 [2024-07-15 16:18:05.452505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000227 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.452536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.929 #33 NEW cov: 12192 ft: 14106 corp: 13/163b lim: 30 exec/s: 0 rss: 72Mb L: 10/23 MS: 1 ChangeByte- 00:07:19.929 [2024-07-15 16:18:05.492462] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:19.929 [2024-07-15 16:18:05.492689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.929 [2024-07-15 16:18:05.492716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.189 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:20.189 #35 NEW cov: 12215 ft: 14167 corp: 14/172b lim: 30 exec/s: 0 rss: 73Mb L: 9/23 MS: 2 CopyPart-PersAutoDict- DE: "I\000\000\000\000\000\000\000"- 00:07:20.189 [2024-07-15 16:18:05.532574] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (336900) > buf size (4096) 00:07:20.189 [2024-07-15 16:18:05.532900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.532927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.189 [2024-07-15 16:18:05.532985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.533001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.189 #39 NEW cov: 12232 ft: 14486 corp: 15/185b lim: 30 exec/s: 0 rss: 73Mb L: 13/23 MS: 4 EraseBytes-ChangeBinInt-CopyPart-CrossOver- 00:07:20.189 [2024-07-15 16:18:05.582637] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (38056) > buf size (4096) 00:07:20.189 [2024-07-15 16:18:05.582873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:25290049 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.582899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.189 #40 NEW cov: 12232 ft: 
14532 corp: 16/195b lim: 30 exec/s: 0 rss: 73Mb L: 10/23 MS: 1 InsertByte- 00:07:20.189 [2024-07-15 16:18:05.622814] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261416) > buf size (4096) 00:07:20.189 [2024-07-15 16:18:05.622937] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x27a6 00:07:20.189 [2024-07-15 16:18:05.623048] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d40a 00:07:20.189 [2024-07-15 16:18:05.623264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff490000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.623291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.189 [2024-07-15 16:18:05.623349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.623365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.189 [2024-07-15 16:18:05.623422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f8d08195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.623438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.189 #41 NEW cov: 12232 ft: 14563 corp: 17/213b lim: 30 exec/s: 41 rss: 73Mb L: 18/23 MS: 1 PersAutoDict- DE: "I\000\000\000\000\000\000\000"- 00:07:20.189 [2024-07-15 16:18:05.662992] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (785412) > buf size (4096) 00:07:20.189 [2024-07-15 16:18:05.663114] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000027 00:07:20.189 [2024-07-15 16:18:05.663232] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa0a 00:07:20.189 [2024-07-15 16:18:05.663461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000227 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.663490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.189 [2024-07-15 16:18:05.663551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:958583d4 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.663569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.189 [2024-07-15 16:18:05.663627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a6f800d0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.189 [2024-07-15 16:18:05.663643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.190 #42 NEW cov: 12232 ft: 14567 corp: 18/231b lim: 30 exec/s: 42 rss: 73Mb L: 18/23 MS: 1 CrossOver- 00:07:20.190 [2024-07-15 16:18:05.703066] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:07:20.190 [2024-07-15 16:18:05.703420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:4 nsid:0 cdw10:29490000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.190 [2024-07-15 16:18:05.703449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.190 [2024-07-15 16:18:05.703511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.190 [2024-07-15 16:18:05.703538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.190 #43 NEW cov: 12232 ft: 14643 corp: 19/243b lim: 30 exec/s: 43 rss: 73Mb L: 12/23 MS: 1 InsertRepeatedBytes- 00:07:20.190 [2024-07-15 16:18:05.763169] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xd095 00:07:20.190 [2024-07-15 16:18:05.763390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:002700a6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.190 [2024-07-15 16:18:05.763419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.449 #44 NEW cov: 12232 ft: 14712 corp: 20/252b lim: 30 exec/s: 44 rss: 73Mb L: 9/23 MS: 1 PersAutoDict- DE: "\000'\246\370\320\225\205\324"- 00:07:20.449 [2024-07-15 16:18:05.803350] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:20.449 [2024-07-15 16:18:05.803490] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:20.449 [2024-07-15 16:18:05.803610] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (40096) > buf size (4096) 00:07:20.449 [2024-07-15 16:18:05.803829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.449 [2024-07-15 16:18:05.803856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.449 [2024-07-15 16:18:05.803917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.449 [2024-07-15 16:18:05.803932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.449 [2024-07-15 16:18:05.803987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27270017 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.449 [2024-07-15 16:18:05.804001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.449 #45 NEW cov: 12232 ft: 14722 corp: 21/275b lim: 30 exec/s: 45 rss: 73Mb L: 23/23 MS: 1 ShuffleBytes- 00:07:20.449 [2024-07-15 16:18:05.853452] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000f8c9 00:07:20.449 [2024-07-15 16:18:05.853693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000227 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.449 [2024-07-15 16:18:05.853719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.449 #46 NEW cov: 12232 ft: 14798 corp: 
22/285b lim: 30 exec/s: 46 rss: 73Mb L: 10/23 MS: 1 ChangeByte- 00:07:20.449 [2024-07-15 16:18:05.913805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.449 [2024-07-15 16:18:05.913833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.449 #47 NEW cov: 12232 ft: 14846 corp: 23/291b lim: 30 exec/s: 47 rss: 73Mb L: 6/23 MS: 1 EraseBytes- 00:07:20.449 [2024-07-15 16:18:05.953762] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261416) > buf size (4096) 00:07:20.449 [2024-07-15 16:18:05.953982] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d40a 00:07:20.449 [2024-07-15 16:18:05.954199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff490000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.449 [2024-07-15 16:18:05.954226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.449 [2024-07-15 16:18:05.954284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.449 [2024-07-15 16:18:05.954305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.449 [2024-07-15 16:18:05.954362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f8d08195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.449 [2024-07-15 16:18:05.954377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.449 #48 NEW cov: 12232 ft: 14864 corp: 24/309b lim: 30 exec/s: 48 rss: 73Mb L: 18/23 MS: 1 ChangeBinInt- 00:07:20.450 [2024-07-15 16:18:06.003907] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:20.450 [2024-07-15 16:18:06.004027] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:20.450 [2024-07-15 16:18:06.004139] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:20.450 [2024-07-15 16:18:06.004356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.450 [2024-07-15 16:18:06.004383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.450 [2024-07-15 16:18:06.004440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.450 [2024-07-15 16:18:06.004455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.450 [2024-07-15 16:18:06.004482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.450 [2024-07-15 16:18:06.004496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.450 #49 NEW cov: 
12232 ft: 14874 corp: 25/332b lim: 30 exec/s: 49 rss: 73Mb L: 23/23 MS: 1 CMP- DE: "\001\000"- 00:07:20.709 [2024-07-15 16:18:06.043951] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f8a6 00:07:20.709 [2024-07-15 16:18:06.044172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:85008395 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.709 [2024-07-15 16:18:06.044199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.709 #50 NEW cov: 12232 ft: 14925 corp: 26/342b lim: 30 exec/s: 50 rss: 73Mb L: 10/23 MS: 1 ShuffleBytes- 00:07:20.709 [2024-07-15 16:18:06.084167] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:20.709 [2024-07-15 16:18:06.084290] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:20.709 [2024-07-15 16:18:06.084401] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000fdfd 00:07:20.709 [2024-07-15 16:18:06.084512] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x27 00:07:20.709 [2024-07-15 16:18:06.084745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.709 [2024-07-15 16:18:06.084771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.709 [2024-07-15 16:18:06.084832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.709 [2024-07-15 16:18:06.084847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.709 [2024-07-15 16:18:06.084905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27fd81fd cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.709 [2024-07-15 16:18:06.084920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.709 [2024-07-15 16:18:06.084985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:27170000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.709 [2024-07-15 16:18:06.085000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.709 #51 NEW cov: 12232 ft: 15421 corp: 27/370b lim: 30 exec/s: 51 rss: 73Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:20.709 [2024-07-15 16:18:06.134277] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:20.709 [2024-07-15 16:18:06.134400] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:20.709 [2024-07-15 16:18:06.134515] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (40096) > buf size (4096) 00:07:20.709 [2024-07-15 16:18:06.134739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.709 [2024-07-15 16:18:06.134765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:20.709 [2024-07-15 16:18:06.134822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.709 [2024-07-15 16:18:06.134838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.709 [2024-07-15 16:18:06.134893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27270017 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.709 [2024-07-15 16:18:06.134907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.709 #52 NEW cov: 12232 ft: 15476 corp: 28/393b lim: 30 exec/s: 52 rss: 73Mb L: 23/28 MS: 1 ChangeByte- 00:07:20.710 [2024-07-15 16:18:06.174355] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (861188) > buf size (4096) 00:07:20.710 [2024-07-15 16:18:06.174480] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:07:20.710 [2024-07-15 16:18:06.174708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49008327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.710 [2024-07-15 16:18:06.174735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.710 [2024-07-15 16:18:06.174793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.710 [2024-07-15 16:18:06.174809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.710 #53 NEW cov: 12232 ft: 15491 corp: 29/406b lim: 30 exec/s: 53 rss: 73Mb L: 13/28 MS: 1 EraseBytes- 00:07:20.710 [2024-07-15 16:18:06.214429] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262304) > buf size (4096) 00:07:20.710 [2024-07-15 16:18:06.214671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:002781a6 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.710 [2024-07-15 16:18:06.214696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.710 #54 NEW cov: 12232 ft: 15500 corp: 30/415b lim: 30 exec/s: 54 rss: 73Mb L: 9/28 MS: 1 CMP- DE: "\000'\246\371h\010\007\350"- 00:07:20.710 [2024-07-15 16:18:06.254607] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:07:20.710 [2024-07-15 16:18:06.254925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29320000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.710 [2024-07-15 16:18:06.254952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.710 [2024-07-15 16:18:06.255014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.710 [2024-07-15 16:18:06.255030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.969 #55 NEW cov: 12232 ft: 15570 corp: 
31/427b lim: 30 exec/s: 55 rss: 73Mb L: 12/28 MS: 1 ChangeByte- 00:07:20.969 [2024-07-15 16:18:06.304780] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261416) > buf size (4096) 00:07:20.969 [2024-07-15 16:18:06.304906] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x27a6 00:07:20.969 [2024-07-15 16:18:06.305019] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000027 00:07:20.969 [2024-07-15 16:18:06.305133] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (170984) > buf size (4096) 00:07:20.969 [2024-07-15 16:18:06.305373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff490000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.969 [2024-07-15 16:18:06.305398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.969 [2024-07-15 16:18:06.305458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.969 [2024-07-15 16:18:06.305472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.969 [2024-07-15 16:18:06.305534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f8d08195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.969 [2024-07-15 16:18:06.305548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.969 [2024-07-15 16:18:06.305605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a6f90068 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.969 [2024-07-15 16:18:06.305620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.969 #56 NEW cov: 12232 ft: 15574 corp: 32/453b lim: 30 exec/s: 56 rss: 73Mb L: 26/28 MS: 1 PersAutoDict- DE: "\000'\246\371h\010\007\350"- 00:07:20.969 [2024-07-15 16:18:06.344881] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (785412) > buf size (4096) 00:07:20.969 [2024-07-15 16:18:06.345004] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xd095 00:07:20.969 [2024-07-15 16:18:06.345132] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x9585 00:07:20.969 [2024-07-15 16:18:06.345357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000227 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.969 [2024-07-15 16:18:06.345385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.969 [2024-07-15 16:18:06.345448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:002700a6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.969 [2024-07-15 16:18:06.345463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.969 [2024-07-15 16:18:06.345523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:85d40095 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 
[2024-07-15 16:18:06.345544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.970 #57 NEW cov: 12232 ft: 15583 corp: 33/471b lim: 30 exec/s: 57 rss: 73Mb L: 18/28 MS: 1 CopyPart- 00:07:20.970 [2024-07-15 16:18:06.395031] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:20.970 [2024-07-15 16:18:06.395154] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:20.970 [2024-07-15 16:18:06.395274] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (40096) > buf size (4096) 00:07:20.970 [2024-07-15 16:18:06.395490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.395518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.970 [2024-07-15 16:18:06.395585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.395602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.970 [2024-07-15 16:18:06.395657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27270017 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.395672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.970 #58 NEW cov: 12232 ft: 15593 corp: 34/494b lim: 30 exec/s: 58 rss: 73Mb L: 23/28 MS: 1 ChangeByte- 00:07:20.970 [2024-07-15 16:18:06.435093] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (336900) > buf size (4096) 00:07:20.970 [2024-07-15 16:18:06.435431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:490081b3 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.435458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.970 [2024-07-15 16:18:06.435518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00490000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.435540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.970 #64 NEW cov: 12232 ft: 15629 corp: 35/511b lim: 30 exec/s: 64 rss: 73Mb L: 17/28 MS: 1 CMP- DE: "\263\001\000\000"- 00:07:20.970 [2024-07-15 16:18:06.485227] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (304424) > buf size (4096) 00:07:20.970 [2024-07-15 16:18:06.485454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29498100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.485478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.970 #65 NEW cov: 12232 ft: 15691 corp: 36/522b lim: 30 exec/s: 65 rss: 73Mb L: 11/28 MS: 1 ShuffleBytes- 00:07:20.970 [2024-07-15 16:18:06.535395] 
ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (785412) > buf size (4096) 00:07:20.970 [2024-07-15 16:18:06.535521] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200009a51 00:07:20.970 [2024-07-15 16:18:06.535646] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000000a 00:07:20.970 [2024-07-15 16:18:06.535861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000227 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.535887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.970 [2024-07-15 16:18:06.535944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:002702a6 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.535960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.970 [2024-07-15 16:18:06.536015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:94f983a6 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.970 [2024-07-15 16:18:06.536030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.229 #66 NEW cov: 12232 ft: 15701 corp: 37/540b lim: 30 exec/s: 66 rss: 73Mb L: 18/28 MS: 1 CMP- DE: "J\232Q\224\371\246'\000"- 00:07:21.229 [2024-07-15 16:18:06.575505] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74756) > buf size (4096) 00:07:21.229 [2024-07-15 16:18:06.575636] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002727 00:07:21.229 [2024-07-15 16:18:06.575750] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (40096) > buf size (4096) 00:07:21.229 [2024-07-15 16:18:06.575964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.229 [2024-07-15 16:18:06.575991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.229 [2024-07-15 16:18:06.576049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:27278327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.229 [2024-07-15 16:18:06.576065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.229 [2024-07-15 16:18:06.576118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:27270017 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.230 [2024-07-15 16:18:06.576133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.230 #67 NEW cov: 12232 ft: 15717 corp: 38/563b lim: 30 exec/s: 67 rss: 73Mb L: 23/28 MS: 1 ChangeBit- 00:07:21.230 [2024-07-15 16:18:06.615632] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (304424) > buf size (4096) 00:07:21.230 [2024-07-15 16:18:06.615758] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1 00:07:21.230 [2024-07-15 16:18:06.616094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29498100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.230 [2024-07-15 16:18:06.616121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.230 [2024-07-15 16:18:06.616179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.230 [2024-07-15 16:18:06.616196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.230 [2024-07-15 16:18:06.616251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.230 [2024-07-15 16:18:06.616266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.230 #68 NEW cov: 12232 ft: 15727 corp: 39/582b lim: 30 exec/s: 34 rss: 73Mb L: 19/28 MS: 1 CMP- DE: "\001\000\000\000\000\000\000?"- 00:07:21.230 #68 DONE cov: 12232 ft: 15727 corp: 39/582b lim: 30 exec/s: 34 rss: 73Mb 00:07:21.230 ###### Recommended dictionary. ###### 00:07:21.230 "I\000\000\000\000\000\000\000" # Uses: 4 00:07:21.230 "\000'\246\370\320\225\205\324" # Uses: 2 00:07:21.230 "\001\000" # Uses: 0 00:07:21.230 "\000'\246\371h\010\007\350" # Uses: 1 00:07:21.230 "\263\001\000\000" # Uses: 0 00:07:21.230 "J\232Q\224\371\246'\000" # Uses: 0 00:07:21.230 "\001\000\000\000\000\000\000?" # Uses: 0 00:07:21.230 ###### End of recommended dictionary. ###### 00:07:21.230 Done 68 runs in 2 second(s) 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.230 16:18:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:21.489 [2024-07-15 16:18:06.822277] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:07:21.489 [2024-07-15 16:18:06.822356] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515554 ] 00:07:21.489 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.489 [2024-07-15 16:18:07.003699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.748 [2024-07-15 16:18:07.076959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.748 [2024-07-15 16:18:07.136757] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.748 [2024-07-15 16:18:07.152945] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:21.748 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.748 INFO: Seed: 205587685 00:07:21.748 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:21.748 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:21.748 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.748 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.748 #2 INITED exec/s: 0 rss: 65Mb 00:07:21.748 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:21.748 This may also happen if the target rejected all inputs we tried so far 00:07:21.748 [2024-07-15 16:18:07.223603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.748 [2024-07-15 16:18:07.223642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.008 NEW_FUNC[1/697]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:22.008 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.008 #3 NEW cov: 11904 ft: 11905 corp: 2/9b lim: 35 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:22.008 [2024-07-15 16:18:07.574734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.008 [2024-07-15 16:18:07.574781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.267 #6 NEW cov: 12034 ft: 12615 corp: 3/17b lim: 35 exec/s: 0 rss: 72Mb L: 8/8 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:07:22.267 [2024-07-15 16:18:07.625680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.625708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.267 [2024-07-15 16:18:07.625794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.625810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.267 [2024-07-15 16:18:07.625893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.625906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.267 [2024-07-15 16:18:07.625998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.626013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.267 #9 NEW cov: 12040 ft: 13446 corp: 4/46b lim: 35 exec/s: 0 rss: 72Mb L: 29/29 MS: 3 EraseBytes-EraseBytes-InsertRepeatedBytes- 00:07:22.267 [2024-07-15 16:18:07.685329] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.267 [2024-07-15 16:18:07.685591] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.267 [2024-07-15 16:18:07.685846] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.267 [2024-07-15 16:18:07.686094] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.267 [2024-07-15 16:18:07.686585] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.686614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.267 [2024-07-15 16:18:07.686708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.686726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.267 [2024-07-15 16:18:07.686815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.686836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.267 [2024-07-15 16:18:07.686926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.686943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.267 [2024-07-15 16:18:07.687039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.687058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.267 #13 NEW cov: 12134 ft: 13786 corp: 5/81b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 4 CrossOver-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:07:22.267 [2024-07-15 16:18:07.735682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.735707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.267 #14 NEW cov: 12134 ft: 13875 corp: 6/89b lim: 35 exec/s: 0 rss: 72Mb L: 8/35 MS: 1 ChangeBit- 00:07:22.267 [2024-07-15 16:18:07.785868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff7500ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.267 [2024-07-15 16:18:07.785893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.267 #15 NEW cov: 12134 ft: 13941 corp: 7/97b lim: 35 exec/s: 0 rss: 72Mb L: 8/35 MS: 1 ChangeByte- 00:07:22.527 [2024-07-15 16:18:07.847394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff7500ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.847422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.527 [2024-07-15 16:18:07.847512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff001a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 
16:18:07.847532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.527 [2024-07-15 16:18:07.847618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.847634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.527 [2024-07-15 16:18:07.847720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.847734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.527 #16 NEW cov: 12134 ft: 14016 corp: 8/127b lim: 35 exec/s: 0 rss: 72Mb L: 30/35 MS: 1 CrossOver- 00:07:22.527 [2024-07-15 16:18:07.916190] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.527 [2024-07-15 16:18:07.916447] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.527 [2024-07-15 16:18:07.916687] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.527 [2024-07-15 16:18:07.917161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.917192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.527 [2024-07-15 16:18:07.917293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.917313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.527 [2024-07-15 16:18:07.917409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.917426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.527 #19 NEW cov: 12134 ft: 14230 corp: 9/149b lim: 35 exec/s: 0 rss: 72Mb L: 22/35 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:07:22.527 [2024-07-15 16:18:07.966736] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.527 [2024-07-15 16:18:07.966992] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.527 [2024-07-15 16:18:07.967251] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.527 [2024-07-15 16:18:07.967728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.967755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.527 [2024-07-15 16:18:07.967849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.967869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.527 [2024-07-15 16:18:07.967964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.967980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.527 [2024-07-15 16:18:07.968068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:07.968084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.527 #20 NEW cov: 12134 ft: 14305 corp: 10/179b lim: 35 exec/s: 0 rss: 72Mb L: 30/35 MS: 1 EraseBytes- 00:07:22.527 [2024-07-15 16:18:08.028032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.527 [2024-07-15 16:18:08.028057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.528 [2024-07-15 16:18:08.028151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.528 [2024-07-15 16:18:08.028169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.528 [2024-07-15 16:18:08.028263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.528 [2024-07-15 16:18:08.028278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.528 [2024-07-15 16:18:08.028376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.528 [2024-07-15 16:18:08.028391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.528 #21 NEW cov: 12134 ft: 14343 corp: 11/210b lim: 35 exec/s: 0 rss: 72Mb L: 31/35 MS: 1 CMP- DE: "\001\000"- 00:07:22.528 [2024-07-15 16:18:08.086931] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.528 [2024-07-15 16:18:08.087189] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.528 [2024-07-15 16:18:08.087458] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.528 [2024-07-15 16:18:08.087905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:16000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.528 [2024-07-15 16:18:08.087934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.528 [2024-07-15 16:18:08.088027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.528 [2024-07-15 16:18:08.088047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.528 [2024-07-15 16:18:08.088135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.528 [2024-07-15 16:18:08.088151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.787 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:22.787 #22 NEW cov: 12157 ft: 14406 corp: 12/232b lim: 35 exec/s: 0 rss: 73Mb L: 22/35 MS: 1 ChangeBinInt- 00:07:22.787 #23 NEW cov: 12157 ft: 14958 corp: 13/248b lim: 35 exec/s: 23 rss: 73Mb L: 16/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\002"- 00:07:22.787 [2024-07-15 16:18:08.227875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:59ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.787 [2024-07-15 16:18:08.227902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.787 #24 NEW cov: 12157 ft: 14979 corp: 14/257b lim: 35 exec/s: 24 rss: 73Mb L: 9/35 MS: 1 InsertByte- 00:07:22.787 [2024-07-15 16:18:08.277996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff7500ff cdw11:ff00faff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.787 [2024-07-15 16:18:08.278023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.787 #25 NEW cov: 12157 ft: 14990 corp: 15/265b lim: 35 exec/s: 25 rss: 73Mb L: 8/35 MS: 1 ChangeBinInt- 00:07:22.787 [2024-07-15 16:18:08.328373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0030000a cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.787 [2024-07-15 16:18:08.328399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.787 #29 NEW cov: 12157 ft: 15075 corp: 16/278b lim: 35 exec/s: 29 rss: 73Mb L: 13/35 MS: 4 CrossOver-InsertByte-InsertByte-PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:07:23.046 [2024-07-15 16:18:08.377938] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.046 [2024-07-15 16:18:08.378410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.046 [2024-07-15 16:18:08.378439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.046 #30 NEW cov: 12157 ft: 15130 corp: 17/286b lim: 35 exec/s: 30 rss: 73Mb L: 8/35 MS: 1 ChangeBinInt- 00:07:23.046 [2024-07-15 16:18:08.428936] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.046 [2024-07-15 16:18:08.429221] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.046 [2024-07-15 16:18:08.429722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.046 [2024-07-15 16:18:08.429753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.046 [2024-07-15 16:18:08.429842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0e000016 cdw11:0000d700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.046 [2024-07-15 16:18:08.429858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.046 [2024-07-15 16:18:08.429955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.046 [2024-07-15 16:18:08.429972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.046 [2024-07-15 16:18:08.430062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.046 [2024-07-15 16:18:08.430085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.046 #31 NEW cov: 12157 ft: 15200 corp: 18/316b lim: 35 exec/s: 31 rss: 73Mb L: 30/35 MS: 1 CrossOver- 00:07:23.046 [2024-07-15 16:18:08.488953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3001000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.046 [2024-07-15 16:18:08.488981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.046 #35 NEW cov: 12157 ft: 15222 corp: 19/326b lim: 35 exec/s: 35 rss: 73Mb L: 10/35 MS: 4 InsertByte-ShuffleBytes-ShuffleBytes-PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:07:23.046 [2024-07-15 16:18:08.539025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff0a000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.046 [2024-07-15 16:18:08.539051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.046 #37 NEW cov: 12157 ft: 15233 corp: 20/334b lim: 35 exec/s: 37 rss: 73Mb L: 8/35 MS: 2 CrossOver-CopyPart- 00:07:23.046 [2024-07-15 16:18:08.589286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.046 [2024-07-15 16:18:08.589313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.046 #38 NEW cov: 12157 ft: 15238 corp: 21/342b lim: 35 exec/s: 38 rss: 73Mb L: 8/35 MS: 1 ShuffleBytes- 00:07:23.306 [2024-07-15 16:18:08.638965] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.306 [2024-07-15 16:18:08.639422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.306 [2024-07-15 16:18:08.639452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.306 #39 NEW cov: 
12157 ft: 15296 corp: 22/350b lim: 35 exec/s: 39 rss: 73Mb L: 8/35 MS: 1 ChangeBit- 00:07:23.306 [2024-07-15 16:18:08.699232] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.306 [2024-07-15 16:18:08.699499] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.306 [2024-07-15 16:18:08.699965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.306 [2024-07-15 16:18:08.699994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.306 [2024-07-15 16:18:08.700096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.306 [2024-07-15 16:18:08.700113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.306 #40 NEW cov: 12157 ft: 15440 corp: 23/369b lim: 35 exec/s: 40 rss: 73Mb L: 19/35 MS: 1 EraseBytes- 00:07:23.306 #41 NEW cov: 12157 ft: 15490 corp: 24/377b lim: 35 exec/s: 41 rss: 73Mb L: 8/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:07:23.306 [2024-07-15 16:18:08.799711] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.306 [2024-07-15 16:18:08.800217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:40300000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.306 [2024-07-15 16:18:08.800247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.306 #42 NEW cov: 12157 ft: 15514 corp: 25/385b lim: 35 exec/s: 42 rss: 73Mb L: 8/35 MS: 1 ChangeByte- 00:07:23.568 #43 NEW cov: 12157 ft: 15525 corp: 26/401b lim: 35 exec/s: 43 rss: 73Mb L: 16/35 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:23.568 [2024-07-15 16:18:08.920962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00acff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 16:18:08.920989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.568 #44 NEW cov: 12157 ft: 15550 corp: 27/410b lim: 35 exec/s: 44 rss: 73Mb L: 9/35 MS: 1 InsertByte- 00:07:23.568 [2024-07-15 16:18:08.982152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 16:18:08.982177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.568 [2024-07-15 16:18:08.982301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 16:18:08.982316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.568 [2024-07-15 16:18:08.982409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 
16:18:08.982423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.568 [2024-07-15 16:18:08.982512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 16:18:08.982530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.568 #45 NEW cov: 12157 ft: 15601 corp: 28/441b lim: 35 exec/s: 45 rss: 73Mb L: 31/35 MS: 1 ShuffleBytes- 00:07:23.568 #46 NEW cov: 12157 ft: 15611 corp: 29/450b lim: 35 exec/s: 46 rss: 73Mb L: 9/35 MS: 1 InsertByte- 00:07:23.568 [2024-07-15 16:18:09.102031] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.568 [2024-07-15 16:18:09.102290] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.568 [2024-07-15 16:18:09.102764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 16:18:09.102792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.568 [2024-07-15 16:18:09.102888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0e000016 cdw11:0000d700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 16:18:09.102904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.568 [2024-07-15 16:18:09.103003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 16:18:09.103019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.568 [2024-07-15 16:18:09.103110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.568 [2024-07-15 16:18:09.103129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.568 #47 NEW cov: 12157 ft: 15633 corp: 30/482b lim: 35 exec/s: 47 rss: 73Mb L: 32/35 MS: 1 CrossOver- 00:07:23.827 [2024-07-15 16:18:09.162086] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.827 [2024-07-15 16:18:09.162345] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.827 [2024-07-15 16:18:09.162608] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.827 [2024-07-15 16:18:09.162855] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.827 [2024-07-15 16:18:09.163097] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.827 [2024-07-15 16:18:09.163537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.827 [2024-07-15 16:18:09.163566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.827 [2024-07-15 16:18:09.163655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.827 [2024-07-15 16:18:09.163673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.827 [2024-07-15 16:18:09.163768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.827 [2024-07-15 16:18:09.163786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.827 [2024-07-15 16:18:09.163878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.827 [2024-07-15 16:18:09.163895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.827 [2024-07-15 16:18:09.163991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.827 [2024-07-15 16:18:09.164010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.827 #48 NEW cov: 12157 ft: 15646 corp: 31/517b lim: 35 exec/s: 48 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:07:23.827 [2024-07-15 16:18:09.212594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff0000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.827 [2024-07-15 16:18:09.212619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.827 #51 NEW cov: 12157 ft: 15664 corp: 32/529b lim: 35 exec/s: 25 rss: 73Mb L: 12/35 MS: 3 EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:23.827 #51 DONE cov: 12157 ft: 15664 corp: 32/529b lim: 35 exec/s: 25 rss: 73Mb 00:07:23.827 ###### Recommended dictionary. ###### 00:07:23.827 "\001\000" # Uses: 1 00:07:23.827 "\001\000\000\000\000\000\000\002" # Uses: 3 00:07:23.827 ###### End of recommended dictionary. 
###### 00:07:23.827 Done 51 runs in 2 second(s) 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:23.827 16:18:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:24.086 [2024-07-15 16:18:09.406939] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:24.086 [2024-07-15 16:18:09.407020] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515923 ] 00:07:24.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.086 [2024-07-15 16:18:09.597845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.344 [2024-07-15 16:18:09.670174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.344 [2024-07-15 16:18:09.730139] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.344 [2024-07-15 16:18:09.746342] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:24.344 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.344 INFO: Seed: 2799583925 00:07:24.344 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:24.344 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:24.344 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:24.344 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.344 #2 INITED exec/s: 0 rss: 65Mb 00:07:24.344 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.344 This may also happen if the target rejected all inputs we tried so far 00:07:24.601 NEW_FUNC[1/686]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:24.602 NEW_FUNC[2/686]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.602 #4 NEW cov: 11815 ft: 11816 corp: 2/12b lim: 20 exec/s: 0 rss: 72Mb L: 11/11 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:24.860 #5 NEW cov: 11945 ft: 12484 corp: 3/23b lim: 20 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 ChangeBinInt- 00:07:24.860 [2024-07-15 16:18:10.202960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.860 [2024-07-15 16:18:10.203008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.860 NEW_FUNC[1/20]: 0x11db1b0 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3359 00:07:24.860 NEW_FUNC[2/20]: 0x11dbd30 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3301 00:07:24.860 #7 NEW cov: 12280 ft: 13269 corp: 4/35b lim: 20 exec/s: 0 rss: 72Mb L: 12/12 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:24.860 #8 NEW cov: 12365 ft: 13504 corp: 5/47b lim: 20 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 InsertByte- 00:07:24.860 #10 NEW cov: 12365 ft: 13700 corp: 6/57b lim: 20 exec/s: 0 rss: 72Mb L: 10/12 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:24.860 #11 NEW cov: 12365 ft: 13795 corp: 7/69b lim: 20 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 ChangeBinInt- 00:07:24.860 #12 NEW cov: 12365 ft: 13852 corp: 8/84b lim: 20 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:07:25.118 #13 NEW cov: 12365 ft: 13872 corp: 9/95b lim: 20 exec/s: 0 rss: 72Mb L: 11/15 MS: 1 ShuffleBytes- 00:07:25.118 #14 NEW cov: 12365 ft: 13906 corp: 10/107b lim: 20 exec/s: 0 rss: 72Mb L: 12/15 MS: 1 ChangeBinInt- 00:07:25.118 [2024-07-15 
16:18:10.513749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.118 [2024-07-15 16:18:10.513785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.118 #15 NEW cov: 12365 ft: 14039 corp: 11/119b lim: 20 exec/s: 0 rss: 72Mb L: 12/15 MS: 1 ChangeByte- 00:07:25.118 #16 NEW cov: 12365 ft: 14057 corp: 12/131b lim: 20 exec/s: 0 rss: 72Mb L: 12/15 MS: 1 CMP- DE: "\001\000\000\273"- 00:07:25.118 [2024-07-15 16:18:10.614109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.118 [2024-07-15 16:18:10.614138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.118 #17 NEW cov: 12365 ft: 14079 corp: 13/143b lim: 20 exec/s: 0 rss: 72Mb L: 12/15 MS: 1 ChangeASCIIInt- 00:07:25.118 [2024-07-15 16:18:10.654228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.118 [2024-07-15 16:18:10.654256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.118 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:25.118 #18 NEW cov: 12388 ft: 14112 corp: 14/155b lim: 20 exec/s: 0 rss: 73Mb L: 12/15 MS: 1 ChangeASCIIInt- 00:07:25.439 [2024-07-15 16:18:10.714282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.439 [2024-07-15 16:18:10.714309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.439 #19 NEW cov: 12388 ft: 14159 corp: 15/167b lim: 20 exec/s: 0 rss: 73Mb L: 12/15 MS: 1 ChangeBinInt- 00:07:25.439 [2024-07-15 16:18:10.764525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.439 [2024-07-15 16:18:10.764558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.439 #20 NEW cov: 12388 ft: 14166 corp: 16/179b lim: 20 exec/s: 20 rss: 73Mb L: 12/15 MS: 1 ChangeBinInt- 00:07:25.439 #21 NEW cov: 12388 ft: 14202 corp: 17/193b lim: 20 exec/s: 21 rss: 73Mb L: 14/15 MS: 1 CopyPart- 00:07:25.439 [2024-07-15 16:18:10.854674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.439 [2024-07-15 16:18:10.854701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.439 #22 NEW cov: 12388 ft: 14258 corp: 18/206b lim: 20 exec/s: 22 rss: 73Mb L: 13/15 MS: 1 InsertByte- 00:07:25.439 [2024-07-15 16:18:10.904848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.439 [2024-07-15 16:18:10.904875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.439 #23 NEW cov: 12388 ft: 14272 corp: 19/218b lim: 20 exec/s: 23 rss: 73Mb L: 12/15 MS: 1 
ChangeASCIIInt- 00:07:25.439 [2024-07-15 16:18:10.944945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.439 [2024-07-15 16:18:10.944972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.439 #24 NEW cov: 12388 ft: 14310 corp: 20/230b lim: 20 exec/s: 24 rss: 73Mb L: 12/15 MS: 1 CrossOver- 00:07:25.731 [2024-07-15 16:18:10.985047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.731 [2024-07-15 16:18:10.985076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.731 #25 NEW cov: 12388 ft: 14329 corp: 21/243b lim: 20 exec/s: 25 rss: 73Mb L: 13/15 MS: 1 PersAutoDict- DE: "\001\000\000\273"- 00:07:25.731 #26 NEW cov: 12388 ft: 14418 corp: 22/254b lim: 20 exec/s: 26 rss: 73Mb L: 11/15 MS: 1 ChangeBinInt- 00:07:25.731 [2024-07-15 16:18:11.085403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.731 [2024-07-15 16:18:11.085435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.731 #27 NEW cov: 12388 ft: 14431 corp: 23/266b lim: 20 exec/s: 27 rss: 73Mb L: 12/15 MS: 1 ChangeByte- 00:07:25.731 #28 NEW cov: 12388 ft: 14446 corp: 24/278b lim: 20 exec/s: 28 rss: 73Mb L: 12/15 MS: 1 CrossOver- 00:07:25.731 #29 NEW cov: 12405 ft: 14582 corp: 25/296b lim: 20 exec/s: 29 rss: 73Mb L: 18/18 MS: 1 CMP- DE: "\030\000\000\000"- 00:07:25.731 #30 NEW cov: 12405 ft: 14628 corp: 26/314b lim: 20 exec/s: 30 rss: 73Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:07:25.731 [2024-07-15 16:18:11.266025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.731 [2024-07-15 16:18:11.266054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.731 #31 NEW cov: 12405 ft: 14662 corp: 27/330b lim: 20 exec/s: 31 rss: 73Mb L: 16/18 MS: 1 InsertRepeatedBytes- 00:07:25.990 #32 NEW cov: 12405 ft: 14682 corp: 28/340b lim: 20 exec/s: 32 rss: 73Mb L: 10/18 MS: 1 EraseBytes- 00:07:25.990 #33 NEW cov: 12405 ft: 14698 corp: 29/354b lim: 20 exec/s: 33 rss: 73Mb L: 14/18 MS: 1 ChangeByte- 00:07:25.990 [2024-07-15 16:18:11.396186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.990 [2024-07-15 16:18:11.396216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.990 #34 NEW cov: 12405 ft: 14739 corp: 30/369b lim: 20 exec/s: 34 rss: 73Mb L: 15/18 MS: 1 EraseBytes- 00:07:25.990 #35 NEW cov: 12405 ft: 14755 corp: 31/380b lim: 20 exec/s: 35 rss: 74Mb L: 11/18 MS: 1 EraseBytes- 00:07:25.990 #36 NEW cov: 12405 ft: 14759 corp: 32/390b lim: 20 exec/s: 36 rss: 74Mb L: 10/18 MS: 1 ShuffleBytes- 00:07:25.990 [2024-07-15 16:18:11.546650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.990 [2024-07-15 16:18:11.546686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.249 #37 NEW cov: 12405 ft: 14769 corp: 33/402b lim: 20 exec/s: 37 rss: 74Mb L: 12/18 MS: 1 CopyPart- 00:07:26.249 [2024-07-15 16:18:11.586722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.249 [2024-07-15 16:18:11.586749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.249 #38 NEW cov: 12405 ft: 14829 corp: 34/415b lim: 20 exec/s: 38 rss: 74Mb L: 13/18 MS: 1 ChangeBit- 00:07:26.249 [2024-07-15 16:18:11.636860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.249 [2024-07-15 16:18:11.636887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.249 #39 NEW cov: 12405 ft: 14868 corp: 35/427b lim: 20 exec/s: 39 rss: 74Mb L: 12/18 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:26.249 #40 NEW cov: 12405 ft: 14879 corp: 36/446b lim: 20 exec/s: 40 rss: 74Mb L: 19/19 MS: 1 CMP- DE: "\377\377\377\377\377\377\377>"- 00:07:26.249 [2024-07-15 16:18:11.727212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.249 [2024-07-15 16:18:11.727243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.249 #41 NEW cov: 12405 ft: 14881 corp: 37/458b lim: 20 exec/s: 41 rss: 74Mb L: 12/19 MS: 1 ChangeASCIIInt- 00:07:26.250 #42 NEW cov: 12405 ft: 14892 corp: 38/477b lim: 20 exec/s: 21 rss: 74Mb L: 19/19 MS: 1 ChangeBit- 00:07:26.250 #42 DONE cov: 12405 ft: 14892 corp: 38/477b lim: 20 exec/s: 21 rss: 74Mb 00:07:26.250 ###### Recommended dictionary. ###### 00:07:26.250 "\001\000\000\273" # Uses: 1 00:07:26.250 "\030\000\000\000" # Uses: 0 00:07:26.250 "\002\000\000\000\000\000\000\000" # Uses: 0 00:07:26.250 "\377\377\377\377\377\377\377>" # Uses: 0 00:07:26.250 ###### End of recommended dictionary. 
###### 00:07:26.250 Done 42 runs in 2 second(s) 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.509 16:18:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:26.509 [2024-07-15 16:18:11.985147] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:26.509 [2024-07-15 16:18:11.985220] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516286 ] 00:07:26.510 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.769 [2024-07-15 16:18:12.165406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.769 [2024-07-15 16:18:12.237385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.769 [2024-07-15 16:18:12.296881] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.769 [2024-07-15 16:18:12.313086] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:26.769 INFO: Running with entropic power schedule (0xFF, 100). 00:07:26.769 INFO: Seed: 1072596783 00:07:27.027 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:27.027 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:27.027 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:27.027 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.027 #2 INITED exec/s: 0 rss: 65Mb 00:07:27.027 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.027 This may also happen if the target rejected all inputs we tried so far 00:07:27.027 [2024-07-15 16:18:12.368824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.027 [2024-07-15 16:18:12.368853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.027 [2024-07-15 16:18:12.368907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.027 [2024-07-15 16:18:12.368922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.028 [2024-07-15 16:18:12.368976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.028 [2024-07-15 16:18:12.368991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.287 NEW_FUNC[1/698]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:27.287 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.287 #17 NEW cov: 11925 ft: 11925 corp: 2/26b lim: 35 exec/s: 0 rss: 72Mb L: 25/25 MS: 5 ChangeBit-ShuffleBytes-CrossOver-CrossOver-InsertRepeatedBytes- 00:07:27.287 [2024-07-15 16:18:12.689708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.287 [2024-07-15 16:18:12.689748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.287 [2024-07-15 16:18:12.689804] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.287 [2024-07-15 16:18:12.689818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.689874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:3a3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.689888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.689942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefe3afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.689957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.288 #18 NEW cov: 12055 ft: 12903 corp: 3/55b lim: 35 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:27.288 [2024-07-15 16:18:12.749817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.749850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.749906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.749923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.749979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.749994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.750048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefefefe cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.750065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.288 #19 NEW cov: 12061 ft: 13144 corp: 4/83b lim: 35 exec/s: 0 rss: 72Mb L: 28/29 MS: 1 CopyPart- 00:07:27.288 [2024-07-15 16:18:12.789761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fe190000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.789787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.789842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.789856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.789909] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.789924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.288 #25 NEW cov: 12146 ft: 13490 corp: 5/108b lim: 35 exec/s: 0 rss: 72Mb L: 25/29 MS: 1 ChangeBinInt- 00:07:27.288 [2024-07-15 16:18:12.830009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.830035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.830090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.830104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.830156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:3a3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.830171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.288 [2024-07-15 16:18:12.830223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:feff3afe cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.288 [2024-07-15 16:18:12.830237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.547 #26 NEW cov: 12146 ft: 13572 corp: 6/141b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:27.547 [2024-07-15 16:18:12.880026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fe190000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.547 [2024-07-15 16:18:12.880052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.547 [2024-07-15 16:18:12.880108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.547 [2024-07-15 16:18:12.880122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.547 [2024-07-15 16:18:12.880178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.547 [2024-07-15 16:18:12.880193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.547 #27 NEW cov: 12146 ft: 13630 corp: 7/166b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 CrossOver- 00:07:27.547 [2024-07-15 16:18:12.930346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.547 [2024-07-15 16:18:12.930373] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.547 [2024-07-15 16:18:12.930429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.547 [2024-07-15 16:18:12.930445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.547 [2024-07-15 16:18:12.930497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:3a3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.547 [2024-07-15 16:18:12.930513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.547 [2024-07-15 16:18:12.930571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.547 [2024-07-15 16:18:12.930586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.548 #28 NEW cov: 12146 ft: 13725 corp: 8/195b lim: 35 exec/s: 0 rss: 72Mb L: 29/33 MS: 1 CMP- DE: "\000\002\000\000"- 00:07:27.548 [2024-07-15 16:18:12.970257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:12.970283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.548 [2024-07-15 16:18:12.970338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:12.970353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.548 [2024-07-15 16:18:12.970408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:12.970422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.548 #29 NEW cov: 12146 ft: 13775 corp: 9/220b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 CopyPart- 00:07:27.548 [2024-07-15 16:18:13.010251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.010278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.548 [2024-07-15 16:18:13.010333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.010348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.548 #34 NEW cov: 12146 ft: 14052 corp: 10/238b lim: 35 exec/s: 0 rss: 72Mb L: 18/33 MS: 5 CrossOver-InsertByte-EraseBytes-CopyPart-InsertRepeatedBytes- 00:07:27.548 [2024-07-15 16:18:13.050629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:4 nsid:0 cdw10:02000a00 cdw11:00fe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.050659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.548 [2024-07-15 16:18:13.050714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.050729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.548 [2024-07-15 16:18:13.050783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:3a3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.050797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.548 [2024-07-15 16:18:13.050852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.050865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.548 #35 NEW cov: 12146 ft: 14121 corp: 11/267b lim: 35 exec/s: 0 rss: 72Mb L: 29/33 MS: 1 PersAutoDict- DE: "\000\002\000\000"- 00:07:27.548 [2024-07-15 16:18:13.100644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fe190000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.100670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.548 [2024-07-15 16:18:13.100726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.100740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.548 [2024-07-15 16:18:13.100792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.548 [2024-07-15 16:18:13.100807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.820 #36 NEW cov: 12146 ft: 14152 corp: 12/292b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 CrossOver- 00:07:27.820 [2024-07-15 16:18:13.150957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02000a00 cdw11:00fe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.150984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.151039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.151054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.151106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fe3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.151120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.151172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.151185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.820 #42 NEW cov: 12146 ft: 14163 corp: 13/321b lim: 35 exec/s: 0 rss: 73Mb L: 29/33 MS: 1 CopyPart- 00:07:27.820 [2024-07-15 16:18:13.201076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02000a00 cdw11:00fe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.201104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.201159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.201174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.201228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:3a3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.201242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.201296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.201310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.820 #43 NEW cov: 12146 ft: 14185 corp: 14/350b lim: 35 exec/s: 0 rss: 73Mb L: 29/33 MS: 1 PersAutoDict- DE: "\000\002\000\000"- 00:07:27.820 [2024-07-15 16:18:13.241043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.241068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.241123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.241137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.241191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fe2e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.241205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.820 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:27.820 #44 NEW cov: 12169 ft: 14238 corp: 15/375b lim: 35 exec/s: 0 rss: 73Mb L: 25/33 MS: 1 ChangeByte- 00:07:27.820 [2024-07-15 16:18:13.281277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.281301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.281355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.281370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.281423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.281438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.281490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.281504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.820 #45 NEW cov: 12169 ft: 14256 corp: 16/408b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:07:27.820 [2024-07-15 16:18:13.331411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00020a85 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.331436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.331491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.331505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.331564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fe3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.331579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.331630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:02003a00 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.331643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.820 #46 NEW cov: 12169 ft: 14259 corp: 17/438b lim: 35 exec/s: 46 rss: 73Mb L: 30/33 MS: 1 InsertByte- 00:07:27.820 [2024-07-15 16:18:13.381582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02fe0a00 cdw11:fefe0000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.381608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.381662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00fefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.381676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.381729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fe3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.381743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.820 [2024-07-15 16:18:13.381796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.820 [2024-07-15 16:18:13.381810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.082 #47 NEW cov: 12169 ft: 14274 corp: 18/467b lim: 35 exec/s: 47 rss: 73Mb L: 29/33 MS: 1 ShuffleBytes- 00:07:28.082 [2024-07-15 16:18:13.431697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.431722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.431778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fe00fefe cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.431793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.431844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.431859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.431912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefefefe cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.431929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.082 #48 NEW cov: 12169 ft: 14289 corp: 19/500b lim: 35 exec/s: 48 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:28.082 [2024-07-15 16:18:13.481539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.481565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.481620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.481635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 #49 NEW cov: 12169 ft: 14310 corp: 20/518b lim: 35 exec/s: 49 rss: 73Mb L: 18/33 MS: 1 CMP- DE: "\000\000"- 00:07:28.082 [2024-07-15 16:18:13.532012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.532037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.532092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fe00fefe cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.532106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.532160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.532174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.532228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.532242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.082 #50 NEW cov: 12169 ft: 14316 corp: 21/552b lim: 35 exec/s: 50 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:07:28.082 [2024-07-15 16:18:13.582314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.582339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.582393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.582408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.582463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.582478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.582534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.582548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.582602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:0000fefe 
cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.582619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.082 #51 NEW cov: 12169 ft: 14402 corp: 22/587b lim: 35 exec/s: 51 rss: 73Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:28.082 [2024-07-15 16:18:13.632327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02fe0a00 cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.632351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.632408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00fefefe cdw11:feec0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.632423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.632475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.632490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.082 [2024-07-15 16:18:13.632544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:02003a00 cdw11:00fe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-15 16:18:13.632558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.341 #52 NEW cov: 12169 ft: 14421 corp: 23/617b lim: 35 exec/s: 52 rss: 73Mb L: 30/35 MS: 1 InsertByte- 00:07:28.341 [2024-07-15 16:18:13.682297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fe320003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.682323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.682377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.682391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.682444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.682459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.341 #53 NEW cov: 12169 ft: 14431 corp: 24/642b lim: 35 exec/s: 53 rss: 73Mb L: 25/35 MS: 1 ChangeByte- 00:07:28.341 [2024-07-15 16:18:13.722564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00020a85 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.722589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 
16:18:13.722644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fe000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.722658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.722711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fe3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.722726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.722783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:02003a00 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.722799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.341 #54 NEW cov: 12169 ft: 14467 corp: 25/672b lim: 35 exec/s: 54 rss: 73Mb L: 30/35 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:28.341 [2024-07-15 16:18:13.772376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52520a52 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.772403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.772456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.772472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.341 #57 NEW cov: 12169 ft: 14485 corp: 26/692b lim: 35 exec/s: 57 rss: 73Mb L: 20/35 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:28.341 [2024-07-15 16:18:13.812770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.812796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.812850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fe00fefe cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.812865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.812917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.812931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.812983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.812998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:28.341 #58 NEW cov: 12169 ft: 14493 corp: 27/725b lim: 35 exec/s: 58 rss: 73Mb L: 33/35 MS: 1 ShuffleBytes- 00:07:28.341 [2024-07-15 16:18:13.852923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.852949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.853006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.853021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.853073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefe0a cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.853088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.853140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefe00fe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.853153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.341 #59 NEW cov: 12169 ft: 14509 corp: 28/756b lim: 35 exec/s: 59 rss: 73Mb L: 31/35 MS: 1 CrossOver- 00:07:28.341 [2024-07-15 16:18:13.892732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.892757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 [2024-07-15 16:18:13.892811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff400003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-15 16:18:13.892826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.600 #60 NEW cov: 12169 ft: 14579 corp: 29/775b lim: 35 exec/s: 60 rss: 73Mb L: 19/35 MS: 1 InsertByte- 00:07:28.600 [2024-07-15 16:18:13.943000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:13.943027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:13.943080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff06 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:13.943095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:13.943148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 
16:18:13.943161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.600 #61 NEW cov: 12169 ft: 14595 corp: 30/797b lim: 35 exec/s: 61 rss: 73Mb L: 22/35 MS: 1 CMP- DE: "\006\000\000\000"- 00:07:28.600 [2024-07-15 16:18:13.983296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02fe0a00 cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:13.983323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:13.983379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00fefefe cdw11:feec0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:13.983394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:13.983450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:13.983466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:13.983519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00023a4f cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:13.983538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.600 #62 NEW cov: 12169 ft: 14633 corp: 31/828b lim: 35 exec/s: 62 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:07:28.600 [2024-07-15 16:18:14.033283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b7fe0afe cdw11:fe320003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.033310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.033365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.033381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.033440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.033455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.600 #63 NEW cov: 12169 ft: 14640 corp: 32/853b lim: 35 exec/s: 63 rss: 74Mb L: 25/35 MS: 1 ChangeByte- 00:07:28.600 [2024-07-15 16:18:14.083596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.083623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.083676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.083690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.083743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffff40 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.083757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.083808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff40ff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.083824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.600 #64 NEW cov: 12169 ft: 14650 corp: 33/881b lim: 35 exec/s: 64 rss: 74Mb L: 28/35 MS: 1 CopyPart- 00:07:28.600 [2024-07-15 16:18:14.133738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02fe0a00 cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.133764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.133816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00fefefe cdw11:feec0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.133832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.133884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.133898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.133953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:02003a00 cdw11:00fe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.133967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.600 #65 NEW cov: 12169 ft: 14655 corp: 34/913b lim: 35 exec/s: 65 rss: 74Mb L: 32/35 MS: 1 CopyPart- 00:07:28.600 [2024-07-15 16:18:14.173831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02fe0a00 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.173859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.173914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00fefe00 cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.173930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.173987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.174002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.600 [2024-07-15 16:18:14.174056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3a3afefe cdw11:3a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-15 16:18:14.174069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.860 #66 NEW cov: 12169 ft: 14664 corp: 35/947b lim: 35 exec/s: 66 rss: 74Mb L: 34/35 MS: 1 CopyPart- 00:07:28.860 [2024-07-15 16:18:14.213950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.213975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.214030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fe00fefe cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.214044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.214098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.214112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.214163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3a3afefe cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.214176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.860 #67 NEW cov: 12169 ft: 14682 corp: 36/980b lim: 35 exec/s: 67 rss: 74Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:28.860 [2024-07-15 16:18:14.254167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.254192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.254247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.254262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.254315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:3a3a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.254329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.254381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:feff3afe cdw11:fffe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.254394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.254446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fefeffff cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.254460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.860 #68 NEW cov: 12169 ft: 14693 corp: 37/1015b lim: 35 exec/s: 68 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:28.860 [2024-07-15 16:18:14.304150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fefe0afe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.304175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.304232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.304246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.304302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fefefefe cdw11:fefe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.304317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.304370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fefefefe cdw11:fefe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.304384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.860 #69 NEW cov: 12169 ft: 14768 corp: 38/1043b lim: 35 exec/s: 69 rss: 74Mb L: 28/35 MS: 1 EraseBytes- 00:07:28.860 [2024-07-15 16:18:14.353992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52520a52 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.354017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.860 [2024-07-15 16:18:14.354072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.860 [2024-07-15 16:18:14.354086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.860 #70 NEW cov: 12169 ft: 14784 corp: 39/1063b lim: 35 exec/s: 35 rss: 74Mb L: 20/35 MS: 1 ChangeByte- 00:07:28.860 #70 DONE cov: 12169 ft: 14784 corp: 39/1063b lim: 35 exec/s: 35 rss: 74Mb 00:07:28.860 ###### Recommended dictionary. ###### 00:07:28.860 "\000\002\000\000" # Uses: 2 00:07:28.860 "\000\000" # Uses: 2 00:07:28.860 "\006\000\000\000" # Uses: 0 00:07:28.860 ###### End of recommended dictionary. 
###### 00:07:28.860 Done 70 runs in 2 second(s) 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.145 16:18:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:29.145 [2024-07-15 16:18:14.567207] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
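The "Recommended dictionary" summary that closes run 4 above lists the byte strings the fuzzer found useful, with a "# Uses:" count for how often each contributed to a coverage-gaining input. The entries are printed with C-style octal escapes; to reuse them as a seed dictionary for a later run, they would be rewritten in libFuzzer's dictionary syntax, which takes one quoted entry per line with \xNN hex escapes. A minimal sketch, assuming the llvm_nvme_fuzz wrapper forwards standard libFuzzer options such as -dict= (nothing in this log confirms that):

    # nvmf_4.dict -- hypothetical dictionary file rebuilt from the run-4 summary;
    # the log's octal escapes ("\000\002\000\000" etc.) become hex escapes here.
    "\x00\x02\x00\x00"
    "\x00\x00"
    "\x06\x00\x00\x00"
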
00:07:29.145 [2024-07-15 16:18:14.567275] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516645 ] 00:07:29.145 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.404 [2024-07-15 16:18:14.757491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.404 [2024-07-15 16:18:14.830306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.404 [2024-07-15 16:18:14.889774] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.404 [2024-07-15 16:18:14.905975] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:29.404 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.404 INFO: Seed: 3664598178 00:07:29.404 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:29.404 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:29.404 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.404 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.404 #2 INITED exec/s: 0 rss: 65Mb 00:07:29.404 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:29.404 This may also happen if the target rejected all inputs we tried so far 00:07:29.404 [2024-07-15 16:18:14.961344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.404 [2024-07-15 16:18:14.961375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.919 NEW_FUNC[1/698]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:29.919 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:29.919 #8 NEW cov: 11935 ft: 11937 corp: 2/12b lim: 45 exec/s: 0 rss: 71Mb L: 11/11 MS: 1 InsertRepeatedBytes- 00:07:29.919 [2024-07-15 16:18:15.302206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.919 [2024-07-15 16:18:15.302249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.919 #9 NEW cov: 12066 ft: 12594 corp: 3/24b lim: 45 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 InsertByte- 00:07:29.919 [2024-07-15 16:18:15.352401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.919 [2024-07-15 16:18:15.352430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.919 [2024-07-15 16:18:15.352486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.919 [2024-07-15 16:18:15.352500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:29.919 #15 NEW cov: 12072 ft: 13506 corp: 4/47b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:07:29.919 [2024-07-15 16:18:15.392538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.919 [2024-07-15 16:18:15.392565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.919 [2024-07-15 16:18:15.392619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0ab1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.919 [2024-07-15 16:18:15.392633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.919 #16 NEW cov: 12157 ft: 13746 corp: 5/70b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 CrossOver- 00:07:29.919 [2024-07-15 16:18:15.442485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.919 [2024-07-15 16:18:15.442512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.919 #17 NEW cov: 12157 ft: 13889 corp: 6/81b lim: 45 exec/s: 0 rss: 72Mb L: 11/23 MS: 1 ChangeByte- 00:07:29.919 [2024-07-15 16:18:15.482656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b124b125 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.919 [2024-07-15 16:18:15.482683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.177 #19 NEW cov: 12157 ft: 13976 corp: 7/90b lim: 45 exec/s: 0 rss: 72Mb L: 9/23 MS: 2 EraseBytes-InsertByte- 00:07:30.177 [2024-07-15 16:18:15.522882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.177 [2024-07-15 16:18:15.522907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.177 [2024-07-15 16:18:15.522961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0ab1b1b1 cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.177 [2024-07-15 16:18:15.522974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.177 #20 NEW cov: 12157 ft: 14066 corp: 8/113b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ShuffleBytes- 00:07:30.177 [2024-07-15 16:18:15.572882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.177 [2024-07-15 16:18:15.572907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.177 #21 NEW cov: 12157 ft: 14117 corp: 9/124b lim: 45 exec/s: 0 rss: 72Mb L: 11/23 MS: 1 ChangeBinInt- 00:07:30.177 [2024-07-15 16:18:15.613402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1ffb125 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.177 [2024-07-15 16:18:15.613429] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.178 [2024-07-15 16:18:15.613484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.178 [2024-07-15 16:18:15.613499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.178 [2024-07-15 16:18:15.613554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.178 [2024-07-15 16:18:15.613570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.178 [2024-07-15 16:18:15.613626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.178 [2024-07-15 16:18:15.613641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.178 #22 NEW cov: 12157 ft: 14529 corp: 10/163b lim: 45 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:30.178 [2024-07-15 16:18:15.663083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.178 [2024-07-15 16:18:15.663109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.178 #23 NEW cov: 12157 ft: 14574 corp: 11/174b lim: 45 exec/s: 0 rss: 72Mb L: 11/39 MS: 1 ChangeBinInt- 00:07:30.178 [2024-07-15 16:18:15.713236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.178 [2024-07-15 16:18:15.713273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.178 #24 NEW cov: 12157 ft: 14607 corp: 12/185b lim: 45 exec/s: 0 rss: 72Mb L: 11/39 MS: 1 ChangeByte- 00:07:30.436 [2024-07-15 16:18:15.763409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1bcb1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.763435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.436 #25 NEW cov: 12157 ft: 14675 corp: 13/196b lim: 45 exec/s: 0 rss: 72Mb L: 11/39 MS: 1 ChangeByte- 00:07:30.436 [2024-07-15 16:18:15.803619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b11bb1b1 cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.803644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.436 [2024-07-15 16:18:15.803699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:b1b11b1b cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.803712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.436 #26 NEW cov: 12157 
ft: 14699 corp: 14/215b lim: 45 exec/s: 0 rss: 72Mb L: 19/39 MS: 1 InsertRepeatedBytes- 00:07:30.436 [2024-07-15 16:18:15.843949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.843975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.436 [2024-07-15 16:18:15.844030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fff5ffff cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.844044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.436 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:30.436 #27 NEW cov: 12180 ft: 14811 corp: 15/238b lim: 45 exec/s: 0 rss: 73Mb L: 23/39 MS: 1 CMP- DE: "\377\377\377\365"- 00:07:30.436 [2024-07-15 16:18:15.894046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.894071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.436 [2024-07-15 16:18:15.894126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fff5ffff cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.894143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.436 [2024-07-15 16:18:15.894196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.894211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.436 #28 NEW cov: 12180 ft: 15072 corp: 16/270b lim: 45 exec/s: 0 rss: 73Mb L: 32/39 MS: 1 CopyPart- 00:07:30.436 [2024-07-15 16:18:15.943880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:a1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.943905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.436 #29 NEW cov: 12180 ft: 15106 corp: 17/281b lim: 45 exec/s: 29 rss: 73Mb L: 11/39 MS: 1 ChangeBit- 00:07:30.436 [2024-07-15 16:18:15.994023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:21b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.436 [2024-07-15 16:18:15.994049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.694 #30 NEW cov: 12180 ft: 15135 corp: 18/292b lim: 45 exec/s: 30 rss: 73Mb L: 11/39 MS: 1 ChangeByte- 00:07:30.694 [2024-07-15 16:18:16.034469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.034494] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.694 [2024-07-15 16:18:16.034551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fff5ffff cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.034566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.694 [2024-07-15 16:18:16.034620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.034635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.694 #31 NEW cov: 12180 ft: 15141 corp: 19/324b lim: 45 exec/s: 31 rss: 73Mb L: 32/39 MS: 1 ShuffleBytes- 00:07:30.694 [2024-07-15 16:18:16.084270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:25b10ab1 cdw11:24b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.084297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.694 #33 NEW cov: 12180 ft: 15155 corp: 20/334b lim: 45 exec/s: 33 rss: 73Mb L: 10/39 MS: 2 ShuffleBytes-CrossOver- 00:07:30.694 [2024-07-15 16:18:16.124536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.124563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.694 [2024-07-15 16:18:16.124617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fff5ffff cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.124631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.694 #34 NEW cov: 12180 ft: 15160 corp: 21/355b lim: 45 exec/s: 34 rss: 73Mb L: 21/39 MS: 1 EraseBytes- 00:07:30.694 [2024-07-15 16:18:16.164657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:14b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.164683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.694 [2024-07-15 16:18:16.164741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffb1ff cdw11:f5b10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.164754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.694 #35 NEW cov: 12180 ft: 15214 corp: 22/379b lim: 45 exec/s: 35 rss: 73Mb L: 24/39 MS: 1 InsertByte- 00:07:30.694 [2024-07-15 16:18:16.205065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.205090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.694 [2024-07-15 
16:18:16.205145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fff5ffff cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.205160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.694 [2024-07-15 16:18:16.205210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:e0e00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.205224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.694 [2024-07-15 16:18:16.205275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e0e0e0e0 cdw11:e0e00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.205288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.694 #36 NEW cov: 12180 ft: 15222 corp: 23/420b lim: 45 exec/s: 36 rss: 73Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:30.694 [2024-07-15 16:18:16.244659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.694 [2024-07-15 16:18:16.244685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.694 #37 NEW cov: 12180 ft: 15241 corp: 24/432b lim: 45 exec/s: 37 rss: 73Mb L: 12/41 MS: 1 ChangeBit- 00:07:31.007 [2024-07-15 16:18:16.284805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1bcb1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.284830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.007 #38 NEW cov: 12180 ft: 15257 corp: 25/445b lim: 45 exec/s: 38 rss: 73Mb L: 13/41 MS: 1 CopyPart- 00:07:31.007 [2024-07-15 16:18:16.334955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b5 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.334980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.007 #39 NEW cov: 12180 ft: 15282 corp: 26/456b lim: 45 exec/s: 39 rss: 73Mb L: 11/41 MS: 1 ChangeBit- 00:07:31.007 [2024-07-15 16:18:16.375248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.375273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.007 [2024-07-15 16:18:16.375329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.375343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.007 #40 NEW cov: 12180 ft: 15317 corp: 27/479b lim: 45 exec/s: 40 rss: 73Mb L: 23/41 MS: 1 ShuffleBytes- 00:07:31.007 [2024-07-15 16:18:16.425693] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.425718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.007 [2024-07-15 16:18:16.425774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:97979797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.425789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.007 [2024-07-15 16:18:16.425843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:97979797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.425857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.007 [2024-07-15 16:18:16.425909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:97979797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.425922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.007 #41 NEW cov: 12180 ft: 15390 corp: 28/522b lim: 45 exec/s: 41 rss: 73Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:07:31.007 [2024-07-15 16:18:16.475356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.475383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.007 #42 NEW cov: 12180 ft: 15405 corp: 29/533b lim: 45 exec/s: 42 rss: 73Mb L: 11/43 MS: 1 PersAutoDict- DE: "\377\377\377\365"- 00:07:31.007 [2024-07-15 16:18:16.515976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1ffb125 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.516004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.007 [2024-07-15 16:18:16.516061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:54ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.516076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.007 [2024-07-15 16:18:16.516130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.516145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.007 [2024-07-15 16:18:16.516200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.516214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:31.007 #43 NEW cov: 12180 ft: 15426 corp: 30/572b lim: 45 exec/s: 43 rss: 73Mb L: 39/43 MS: 1 ChangeByte- 00:07:31.007 [2024-07-15 16:18:16.565673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b125 cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.007 [2024-07-15 16:18:16.565699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.265 #44 NEW cov: 12180 ft: 15437 corp: 31/583b lim: 45 exec/s: 44 rss: 73Mb L: 11/43 MS: 1 CopyPart- 00:07:31.265 [2024-07-15 16:18:16.606042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.606072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.266 [2024-07-15 16:18:16.606123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.606137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.266 [2024-07-15 16:18:16.606188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.606202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.266 #45 NEW cov: 12180 ft: 15445 corp: 32/618b lim: 45 exec/s: 45 rss: 73Mb L: 35/43 MS: 1 InsertRepeatedBytes- 00:07:31.266 [2024-07-15 16:18:16.645977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b11bb1b1 cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.646003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.266 [2024-07-15 16:18:16.646056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:b1b11b1b cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.646071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.266 #46 NEW cov: 12180 ft: 15449 corp: 33/637b lim: 45 exec/s: 46 rss: 73Mb L: 19/43 MS: 1 ShuffleBytes- 00:07:31.266 [2024-07-15 16:18:16.696134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.696161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.266 [2024-07-15 16:18:16.696213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0ab1b1b1 cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.696228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.266 #47 NEW cov: 12180 ft: 15466 corp: 34/660b lim: 45 exec/s: 47 rss: 73Mb L: 23/43 MS: 1 ChangeBinInt- 00:07:31.266 
[2024-07-15 16:18:16.736112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:25b10ab1 cdw11:24b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.736139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.266 #48 NEW cov: 12180 ft: 15515 corp: 35/670b lim: 45 exec/s: 48 rss: 73Mb L: 10/43 MS: 1 ShuffleBytes- 00:07:31.266 [2024-07-15 16:18:16.786400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1a1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.786427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.266 [2024-07-15 16:18:16.786477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0ab1b1b1 cdw11:b1240005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.786491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.266 #49 NEW cov: 12180 ft: 15523 corp: 36/693b lim: 45 exec/s: 49 rss: 74Mb L: 23/43 MS: 1 ChangeBit- 00:07:31.266 [2024-07-15 16:18:16.836381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.266 [2024-07-15 16:18:16.836407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.524 #50 NEW cov: 12180 ft: 15540 corp: 37/704b lim: 45 exec/s: 50 rss: 74Mb L: 11/43 MS: 1 ChangeByte- 00:07:31.525 [2024-07-15 16:18:16.886536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.525 [2024-07-15 16:18:16.886563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.525 #51 NEW cov: 12180 ft: 15546 corp: 38/715b lim: 45 exec/s: 51 rss: 74Mb L: 11/43 MS: 1 ShuffleBytes- 00:07:31.525 [2024-07-15 16:18:16.926665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:25b10ab1 cdw11:24b10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.525 [2024-07-15 16:18:16.926690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.525 #52 NEW cov: 12180 ft: 15582 corp: 39/725b lim: 45 exec/s: 26 rss: 74Mb L: 10/43 MS: 1 CrossOver- 00:07:31.525 #52 DONE cov: 12180 ft: 15582 corp: 39/725b lim: 45 exec/s: 26 rss: 74Mb 00:07:31.525 ###### Recommended dictionary. ###### 00:07:31.525 "\377\377\377\365" # Uses: 1 00:07:31.525 ###### End of recommended dictionary. 
###### 00:07:31.525 Done 52 runs in 2 second(s) 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:31.525 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.783 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.783 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.783 16:18:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:31.783 [2024-07-15 16:18:17.132271] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
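The run.sh trace above shows the per-run setup repeated for fuzzer type 6: pick port 4406, rewrite the trsvcid in the shared fuzz_json.conf, register two LSAN leak suppressions, create the corpus directory, and launch llvm_nvme_fuzz. A condensed standalone sketch of those steps follows; the sed and echo redirections are inferred, since the set -x trace does not print them, $SPDK stands in for the /var/jenkins/workspace/short-fuzz-phy-autotest/spdk checkout, and LSAN_OPTIONS is exported here rather than kept as a shell local the way run.sh does:

    port=4406
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # retarget the shared NVMe/TCP config at this run's port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_6.conf
    # suppress the two known leak sites listed in the trace
    echo 'leak:spdk_nvmf_qpair_disconnect' >  /var/tmp/suppress_nvmf_fuzz
    echo 'leak:nvmf_ctrlr_create'          >> /var/tmp/suppress_nvmf_fuzz
    export LSAN_OPTIONS="report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0"
    mkdir -p "$SPDK/../corpus/llvm_nvmf_6"
    "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$SPDK/../output/llvm/" -F "$trid" -c /tmp/fuzz_json_6.conf \
        -t 1 -D "$SPDK/../corpus/llvm_nvmf_6" -Z 6
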
00:07:31.783 [2024-07-15 16:18:17.132349] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517012 ] 00:07:31.783 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.783 [2024-07-15 16:18:17.325649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.042 [2024-07-15 16:18:17.398144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.042 [2024-07-15 16:18:17.457589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.042 [2024-07-15 16:18:17.473788] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:32.042 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.042 INFO: Seed: 1938632496 00:07:32.042 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:32.042 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:32.042 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:32.042 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.042 #2 INITED exec/s: 0 rss: 64Mb 00:07:32.042 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.042 This may also happen if the target rejected all inputs we tried so far 00:07:32.042 [2024-07-15 16:18:17.519078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:07:32.042 [2024-07-15 16:18:17.519107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.299 NEW_FUNC[1/696]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:32.299 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:32.299 #3 NEW cov: 11853 ft: 11850 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:07:32.300 [2024-07-15 16:18:17.859959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:07:32.300 [2024-07-15 16:18:17.860001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.558 #4 NEW cov: 11983 ft: 12373 corp: 3/5b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:32.558 [2024-07-15 16:18:17.910025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.558 [2024-07-15 16:18:17.910054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.558 #5 NEW cov: 11989 ft: 12808 corp: 4/8b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CrossOver- 00:07:32.558 [2024-07-15 16:18:17.950281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.558 [2024-07-15 16:18:17.950307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.558 [2024-07-15 
16:18:17.950360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.558 [2024-07-15 16:18:17.950375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.558 #7 NEW cov: 12074 ft: 13187 corp: 5/13b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:32.558 [2024-07-15 16:18:17.990353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.558 [2024-07-15 16:18:17.990380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.558 [2024-07-15 16:18:17.990434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:32.558 [2024-07-15 16:18:17.990448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.558 #8 NEW cov: 12074 ft: 13256 corp: 6/18b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:07:32.558 [2024-07-15 16:18:18.040404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:07:32.558 [2024-07-15 16:18:18.040433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.558 #9 NEW cov: 12074 ft: 13386 corp: 7/20b lim: 10 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 EraseBytes- 00:07:32.558 [2024-07-15 16:18:18.090495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009c40 cdw11:00000000 00:07:32.558 [2024-07-15 16:18:18.090521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.558 #10 NEW cov: 12074 ft: 13450 corp: 8/22b lim: 10 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:07:32.817 [2024-07-15 16:18:18.140707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009c40 cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.140735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.817 #11 NEW cov: 12074 ft: 13504 corp: 9/24b lim: 10 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:32.817 [2024-07-15 16:18:18.190807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002d40 cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.190833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.817 #13 NEW cov: 12074 ft: 13555 corp: 10/26b lim: 10 exec/s: 0 rss: 73Mb L: 2/5 MS: 2 EraseBytes-InsertByte- 00:07:32.817 [2024-07-15 16:18:18.240907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002d40 cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.240931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.817 #14 NEW cov: 12074 ft: 13621 corp: 11/28b lim: 10 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:32.817 [2024-07-15 16:18:18.291399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 
nsid:0 cdw10:0000004c cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.291425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.817 [2024-07-15 16:18:18.291478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004c4c cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.291492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.817 [2024-07-15 16:18:18.291545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004c00 cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.291560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.817 [2024-07-15 16:18:18.291613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.291627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.817 #15 NEW cov: 12074 ft: 13925 corp: 12/37b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:32.817 [2024-07-15 16:18:18.331272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001313 cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.331298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.817 [2024-07-15 16:18:18.331352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001313 cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.331366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.817 #18 NEW cov: 12074 ft: 13987 corp: 13/42b lim: 10 exec/s: 0 rss: 73Mb L: 5/9 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:32.817 [2024-07-15 16:18:18.381299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.817 [2024-07-15 16:18:18.381327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.076 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:33.076 #19 NEW cov: 12097 ft: 13993 corp: 14/45b lim: 10 exec/s: 0 rss: 73Mb L: 3/9 MS: 1 EraseBytes- 00:07:33.076 [2024-07-15 16:18:18.421428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.421454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.076 #20 NEW cov: 12097 ft: 14066 corp: 15/48b lim: 10 exec/s: 0 rss: 73Mb L: 3/9 MS: 1 ChangeBit- 00:07:33.076 [2024-07-15 16:18:18.461535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.461561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.076 #21 NEW cov: 12097 ft: 14082 corp: 16/50b lim: 10 exec/s: 0 rss: 73Mb L: 2/9 
MS: 1 CopyPart- 00:07:33.076 [2024-07-15 16:18:18.501774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001313 cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.501799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.076 [2024-07-15 16:18:18.501853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001313 cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.501867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.076 #22 NEW cov: 12097 ft: 14097 corp: 17/55b lim: 10 exec/s: 22 rss: 73Mb L: 5/9 MS: 1 CopyPart- 00:07:33.076 [2024-07-15 16:18:18.551930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a9c cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.551955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.076 [2024-07-15 16:18:18.552009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004040 cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.552024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.076 #23 NEW cov: 12097 ft: 14143 corp: 18/59b lim: 10 exec/s: 23 rss: 73Mb L: 4/9 MS: 1 CrossOver- 00:07:33.076 [2024-07-15 16:18:18.591883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000052d cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.591909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.076 #24 NEW cov: 12097 ft: 14167 corp: 19/62b lim: 10 exec/s: 24 rss: 73Mb L: 3/9 MS: 1 InsertByte- 00:07:33.076 [2024-07-15 16:18:18.642262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000005ff cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.642288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.076 [2024-07-15 16:18:18.642341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.642355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.076 [2024-07-15 16:18:18.642406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002d40 cdw11:00000000 00:07:33.076 [2024-07-15 16:18:18.642421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.335 #25 NEW cov: 12097 ft: 14299 corp: 20/68b lim: 10 exec/s: 25 rss: 73Mb L: 6/9 MS: 1 InsertRepeatedBytes- 00:07:33.335 [2024-07-15 16:18:18.692186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.335 [2024-07-15 16:18:18.692211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.335 #26 NEW cov: 12097 ft: 14331 corp: 21/71b lim: 10 exec/s: 26 rss: 73Mb L: 3/9 MS: 1 
CopyPart- 00:07:33.335 [2024-07-15 16:18:18.742312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.335 [2024-07-15 16:18:18.742337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.335 #27 NEW cov: 12097 ft: 14353 corp: 22/74b lim: 10 exec/s: 27 rss: 73Mb L: 3/9 MS: 1 ShuffleBytes- 00:07:33.335 [2024-07-15 16:18:18.782418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009c40 cdw11:00000000 00:07:33.335 [2024-07-15 16:18:18.782443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.335 #28 NEW cov: 12097 ft: 14372 corp: 23/76b lim: 10 exec/s: 28 rss: 73Mb L: 2/9 MS: 1 ShuffleBytes- 00:07:33.335 [2024-07-15 16:18:18.822563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002d21 cdw11:00000000 00:07:33.335 [2024-07-15 16:18:18.822589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.335 #29 NEW cov: 12097 ft: 14392 corp: 24/79b lim: 10 exec/s: 29 rss: 73Mb L: 3/9 MS: 1 InsertByte- 00:07:33.335 [2024-07-15 16:18:18.862768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001312 cdw11:00000000 00:07:33.335 [2024-07-15 16:18:18.862794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.335 [2024-07-15 16:18:18.862846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001313 cdw11:00000000 00:07:33.335 [2024-07-15 16:18:18.862860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.335 #30 NEW cov: 12097 ft: 14405 corp: 25/84b lim: 10 exec/s: 30 rss: 73Mb L: 5/9 MS: 1 ChangeBit- 00:07:33.335 [2024-07-15 16:18:18.912859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000110a cdw11:00000000 00:07:33.335 [2024-07-15 16:18:18.912884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.594 #31 NEW cov: 12097 ft: 14413 corp: 26/86b lim: 10 exec/s: 31 rss: 73Mb L: 2/9 MS: 1 ChangeBinInt- 00:07:33.594 [2024-07-15 16:18:18.962950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009c40 cdw11:00000000 00:07:33.594 [2024-07-15 16:18:18.962975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.594 #33 NEW cov: 12097 ft: 14447 corp: 27/89b lim: 10 exec/s: 33 rss: 73Mb L: 3/9 MS: 2 EraseBytes-CrossOver- 00:07:33.594 [2024-07-15 16:18:19.003027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001160 cdw11:00000000 00:07:33.594 [2024-07-15 16:18:19.003052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.594 #34 NEW cov: 12097 ft: 14461 corp: 28/92b lim: 10 exec/s: 34 rss: 74Mb L: 3/9 MS: 1 CrossOver- 00:07:33.594 [2024-07-15 16:18:19.053199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009c9c cdw11:00000000 00:07:33.594 [2024-07-15 16:18:19.053223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.594 #36 NEW cov: 12097 ft: 14498 corp: 29/94b lim: 10 exec/s: 36 rss: 74Mb L: 2/9 MS: 2 CrossOver-CopyPart- 00:07:33.594 [2024-07-15 16:18:19.093462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001313 cdw11:00000000 00:07:33.594 [2024-07-15 16:18:19.093491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.594 [2024-07-15 16:18:19.093549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001313 cdw11:00000000 00:07:33.594 [2024-07-15 16:18:19.093564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.594 #37 NEW cov: 12097 ft: 14509 corp: 30/99b lim: 10 exec/s: 37 rss: 74Mb L: 5/9 MS: 1 ChangeByte- 00:07:33.594 [2024-07-15 16:18:19.133459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a9c cdw11:00000000 00:07:33.594 [2024-07-15 16:18:19.133484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.594 #38 NEW cov: 12097 ft: 14526 corp: 31/102b lim: 10 exec/s: 38 rss: 74Mb L: 3/9 MS: 1 ShuffleBytes- 00:07:33.852 [2024-07-15 16:18:19.183593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.183620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.853 #39 NEW cov: 12097 ft: 14528 corp: 32/105b lim: 10 exec/s: 39 rss: 74Mb L: 3/9 MS: 1 EraseBytes- 00:07:33.853 [2024-07-15 16:18:19.223827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001312 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.223853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.853 [2024-07-15 16:18:19.223906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001312 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.223921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.853 #40 NEW cov: 12097 ft: 14535 corp: 33/110b lim: 10 exec/s: 40 rss: 74Mb L: 5/9 MS: 1 ChangeBinInt- 00:07:33.853 [2024-07-15 16:18:19.273832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a13 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.273859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.853 #41 NEW cov: 12097 ft: 14538 corp: 34/112b lim: 10 exec/s: 41 rss: 74Mb L: 2/9 MS: 1 CrossOver- 00:07:33.853 [2024-07-15 16:18:19.314201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a34 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.314227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:33.853 [2024-07-15 16:18:19.314281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003434 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.314296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.853 [2024-07-15 16:18:19.314347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003434 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.314362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.853 #43 NEW cov: 12097 ft: 14635 corp: 35/119b lim: 10 exec/s: 43 rss: 74Mb L: 7/9 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:33.853 [2024-07-15 16:18:19.354091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009c40 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.354118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.853 #44 NEW cov: 12097 ft: 14661 corp: 36/122b lim: 10 exec/s: 44 rss: 74Mb L: 3/9 MS: 1 InsertByte- 00:07:33.853 [2024-07-15 16:18:19.394321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001300 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.394350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.853 [2024-07-15 16:18:19.394404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001313 cdw11:00000000 00:07:33.853 [2024-07-15 16:18:19.394419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.853 #45 NEW cov: 12097 ft: 14710 corp: 37/127b lim: 10 exec/s: 45 rss: 74Mb L: 5/9 MS: 1 ShuffleBytes- 00:07:34.112 [2024-07-15 16:18:19.434342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a9c cdw11:00000000 00:07:34.112 [2024-07-15 16:18:19.434370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.112 #46 NEW cov: 12097 ft: 14716 corp: 38/130b lim: 10 exec/s: 46 rss: 74Mb L: 3/9 MS: 1 CrossOver- 00:07:34.112 [2024-07-15 16:18:19.474415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003040 cdw11:00000000 00:07:34.112 [2024-07-15 16:18:19.474441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.112 #47 NEW cov: 12097 ft: 14731 corp: 39/133b lim: 10 exec/s: 23 rss: 74Mb L: 3/9 MS: 1 ChangeByte- 00:07:34.112 #47 DONE cov: 12097 ft: 14731 corp: 39/133b lim: 10 exec/s: 23 rss: 74Mb 00:07:34.112 Done 47 runs in 2 second(s) 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:34.112 16:18:19 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.112 16:18:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:34.112 [2024-07-15 16:18:19.680642] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:07:34.112 [2024-07-15 16:18:19.680714] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517373 ] 00:07:34.371 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.371 [2024-07-15 16:18:19.878590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.631 [2024-07-15 16:18:19.950889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.631 [2024-07-15 16:18:20.010496] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.631 [2024-07-15 16:18:20.026708] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:34.631 INFO: Running with entropic power schedule (0xFF, 100). 
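The leak-suppression lines at run.sh@32, @41, and @42 above imply the following LeakSanitizer setup; a minimal sketch, assuming the two echo commands are redirected into the suppression file (the log drops the redirections):

    # Sketch of the LeakSanitizer suppression setup implied above.
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    # Known allocations reachable from these two functions are ignored while fuzzing.
    echo 'leak:spdk_nvmf_qpair_disconnect' >  "$suppress_file"
    echo 'leak:nvmf_ctrlr_create'          >> "$suppress_file"
    export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"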
00:07:34.631 INFO: Seed: 197022823 00:07:34.631 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:34.631 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:34.631 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.631 INFO: A corpus is not provided, starting from an empty corpus 00:07:34.631 #2 INITED exec/s: 0 rss: 64Mb 00:07:34.631 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:34.631 This may also happen if the target rejected all inputs we tried so far 00:07:34.631 [2024-07-15 16:18:20.072047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009393 cdw11:00000000 00:07:34.631 [2024-07-15 16:18:20.072087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.889 NEW_FUNC[1/696]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:34.889 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:34.889 #14 NEW cov: 11853 ft: 11845 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 2 ChangeByte-CopyPart- 00:07:34.889 [2024-07-15 16:18:20.413252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.889 [2024-07-15 16:18:20.413295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.889 [2024-07-15 16:18:20.413346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.889 [2024-07-15 16:18:20.413360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.889 [2024-07-15 16:18:20.413410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.889 [2024-07-15 16:18:20.413424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.889 [2024-07-15 16:18:20.413473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.889 [2024-07-15 16:18:20.413486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.889 [2024-07-15 16:18:20.413539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:34.889 [2024-07-15 16:18:20.413553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.889 #16 NEW cov: 11983 ft: 12802 corp: 3/13b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:34.889 [2024-07-15 16:18:20.452856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.889 [2024-07-15 16:18:20.452884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
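Note that this run, like the previous one, reports "0 files found" and starts from an empty corpus. If seed inputs were available, they could be dropped into the directory passed via -D before launching; a sketch, where the seed-input directory is hypothetical:

    # Seeding the corpus directory that llvm_nvme_fuzz reads via -D (run 7's path from the log).
    corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
    mkdir -p "$corpus_dir"
    cp ~/nvmf_seeds/* "$corpus_dir"/    # hypothetical directory of seed inputs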
00:07:35.146 #17 NEW cov: 11989 ft: 13142 corp: 4/15b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 CrossOver- 00:07:35.146 [2024-07-15 16:18:20.493001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.146 [2024-07-15 16:18:20.493031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.146 #18 NEW cov: 12074 ft: 13353 corp: 5/18b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 1 CrossOver- 00:07:35.146 [2024-07-15 16:18:20.543619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.146 [2024-07-15 16:18:20.543646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.146 [2024-07-15 16:18:20.543697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.146 [2024-07-15 16:18:20.543711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.146 [2024-07-15 16:18:20.543762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.146 [2024-07-15 16:18:20.543777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.146 [2024-07-15 16:18:20.543827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a400 cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.543841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.147 [2024-07-15 16:18:20.543890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.543905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.147 #19 NEW cov: 12074 ft: 13439 corp: 6/28b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:07:35.147 [2024-07-15 16:18:20.593270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0a cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.593298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.147 #20 NEW cov: 12074 ft: 13519 corp: 7/30b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:07:35.147 [2024-07-15 16:18:20.633511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.633544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.147 [2024-07-15 16:18:20.633593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.633607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.147 #21 NEW cov: 12074 ft: 13732 corp: 8/35b lim: 10 exec/s: 0 rss: 72Mb L: 5/10 MS: 1 CrossOver- 00:07:35.147 
[2024-07-15 16:18:20.673921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.673948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.147 [2024-07-15 16:18:20.673998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.674012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.147 [2024-07-15 16:18:20.674060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f900 cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.674075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.147 [2024-07-15 16:18:20.674125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.674143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.147 [2024-07-15 16:18:20.674192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.674206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.147 #22 NEW cov: 12074 ft: 13813 corp: 9/45b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:35.147 [2024-07-15 16:18:20.713577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009390 cdw11:00000000 00:07:35.147 [2024-07-15 16:18:20.713603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.404 #23 NEW cov: 12074 ft: 13844 corp: 10/47b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:35.404 [2024-07-15 16:18:20.764159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.764184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.764234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.764250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.764298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.764311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.764359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.764374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.404 
[2024-07-15 16:18:20.764424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.764438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.404 #24 NEW cov: 12074 ft: 13877 corp: 11/57b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:35.404 [2024-07-15 16:18:20.804298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.804323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.804373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.804387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.804438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000900 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.804452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.804502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.804515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.804568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.804582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.404 #25 NEW cov: 12074 ft: 13955 corp: 12/67b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CMP- DE: "\011\000"- 00:07:35.404 [2024-07-15 16:18:20.854063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.854089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.404 #26 NEW cov: 12074 ft: 14028 corp: 13/70b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 1 CopyPart- 00:07:35.404 [2024-07-15 16:18:20.904149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009190 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.904175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.404 #27 NEW cov: 12074 ft: 14048 corp: 14/72b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:07:35.404 [2024-07-15 16:18:20.954699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.954725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.954776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:5 nsid:0 cdw10:0000faff cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.954789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.954837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.954851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.954899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.954914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.404 [2024-07-15 16:18:20.954961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:35.404 [2024-07-15 16:18:20.954975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.404 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:35.404 #28 NEW cov: 12097 ft: 14093 corp: 15/82b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:35.692 [2024-07-15 16:18:20.994342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009311 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:20.994369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.692 #29 NEW cov: 12097 ft: 14109 corp: 16/85b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:07:35.692 [2024-07-15 16:18:21.034906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.034932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.692 [2024-07-15 16:18:21.034983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.034997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.692 [2024-07-15 16:18:21.035047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.035061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.692 [2024-07-15 16:18:21.035114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.035127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.692 [2024-07-15 16:18:21.035175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.035189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
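Each "#N NEW cov: …" line above reports the cumulative edge count after libFuzzer's Nth interesting input, so coverage growth for a run can be charted directly from a saved console log. A sketch (the log filename is hypothetical; the field positions match the lines above):

    # Extract (input_number, edges_covered) pairs from libFuzzer progress lines.
    grep -o '#[0-9]* NEW cov: [0-9]*' console.log |
        awk '{ sub(/^#/, "", $1); print $1, $4 }'
    # e.g. "#30 NEW cov: 12097" becomes "30 12097"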
00:07:35.692 #30 NEW cov: 12097 ft: 14129 corp: 17/95b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:07:35.692 [2024-07-15 16:18:21.084624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.084649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.692 #32 NEW cov: 12097 ft: 14155 corp: 18/97b lim: 10 exec/s: 32 rss: 73Mb L: 2/10 MS: 2 EraseBytes-InsertByte- 00:07:35.692 [2024-07-15 16:18:21.134753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009130 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.134779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.692 #33 NEW cov: 12097 ft: 14207 corp: 19/100b lim: 10 exec/s: 33 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:07:35.692 [2024-07-15 16:18:21.185104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.185129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.692 [2024-07-15 16:18:21.185181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007b7b cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.185195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.692 [2024-07-15 16:18:21.185245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007b7b cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.185260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.692 #34 NEW cov: 12097 ft: 14366 corp: 20/106b lim: 10 exec/s: 34 rss: 73Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:07:35.692 [2024-07-15 16:18:21.235302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.235328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.692 [2024-07-15 16:18:21.235378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.235391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.692 [2024-07-15 16:18:21.235441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.692 [2024-07-15 16:18:21.235454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.692 #35 NEW cov: 12097 ft: 14372 corp: 21/113b lim: 10 exec/s: 35 rss: 73Mb L: 7/10 MS: 1 EraseBytes- 00:07:35.951 [2024-07-15 16:18:21.275293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.275318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.275368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a2c cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.275384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.951 #36 NEW cov: 12097 ft: 14473 corp: 22/117b lim: 10 exec/s: 36 rss: 73Mb L: 4/10 MS: 1 InsertByte- 00:07:35.951 [2024-07-15 16:18:21.315619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.315644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.315694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.315708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.315758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.315773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.315823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000093 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.315837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.951 #37 NEW cov: 12097 ft: 14493 corp: 23/126b lim: 10 exec/s: 37 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:35.951 [2024-07-15 16:18:21.355517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.355549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.355599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.355613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.951 #38 NEW cov: 12097 ft: 14509 corp: 24/131b lim: 10 exec/s: 38 rss: 73Mb L: 5/10 MS: 1 EraseBytes- 00:07:35.951 [2024-07-15 16:18:21.405487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.405513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.951 #39 NEW cov: 12097 ft: 14520 corp: 25/134b lim: 10 exec/s: 39 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:07:35.951 [2024-07-15 16:18:21.445620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009190 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.445645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.951 #40 NEW cov: 12097 ft: 14541 corp: 26/136b lim: 10 exec/s: 40 rss: 73Mb L: 
2/10 MS: 1 CrossOver- 00:07:35.951 [2024-07-15 16:18:21.485776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.485802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.951 #41 NEW cov: 12097 ft: 14556 corp: 27/139b lim: 10 exec/s: 41 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:07:35.951 [2024-07-15 16:18:21.526341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.526367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.526419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.526433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.526486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.526501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.526554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.526568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.951 [2024-07-15 16:18:21.526619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:35.951 [2024-07-15 16:18:21.526634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.209 #42 NEW cov: 12097 ft: 14599 corp: 28/149b lim: 10 exec/s: 42 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:36.210 [2024-07-15 16:18:21.576444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.576470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.210 [2024-07-15 16:18:21.576522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002fff cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.576540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.210 [2024-07-15 16:18:21.576592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000900 cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.576607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.210 [2024-07-15 16:18:21.576656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.576670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:36.210 [2024-07-15 16:18:21.576719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.576734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.210 #43 NEW cov: 12097 ft: 14625 corp: 29/159b lim: 10 exec/s: 43 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:07:36.210 [2024-07-15 16:18:21.626115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.626140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.210 #44 NEW cov: 12097 ft: 14661 corp: 30/162b lim: 10 exec/s: 44 rss: 73Mb L: 3/10 MS: 1 ChangeBit- 00:07:36.210 [2024-07-15 16:18:21.666369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.666394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.210 [2024-07-15 16:18:21.666443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.666457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.210 #45 NEW cov: 12097 ft: 14713 corp: 31/166b lim: 10 exec/s: 45 rss: 73Mb L: 4/10 MS: 1 CopyPart- 00:07:36.210 [2024-07-15 16:18:21.716405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.716430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.210 #46 NEW cov: 12097 ft: 14721 corp: 32/168b lim: 10 exec/s: 46 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:36.210 [2024-07-15 16:18:21.756613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a7b cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.756638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.210 [2024-07-15 16:18:21.756686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007b7b cdw11:00000000 00:07:36.210 [2024-07-15 16:18:21.756700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.468 #47 NEW cov: 12097 ft: 14744 corp: 33/172b lim: 10 exec/s: 47 rss: 73Mb L: 4/10 MS: 1 EraseBytes- 00:07:36.468 [2024-07-15 16:18:21.806631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a28 cdw11:00000000 00:07:36.468 [2024-07-15 16:18:21.806657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.468 #48 NEW cov: 12097 ft: 14751 corp: 34/174b lim: 10 exec/s: 48 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:07:36.468 [2024-07-15 16:18:21.856739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009390 cdw11:00000000 00:07:36.468 [2024-07-15 
16:18:21.856765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.468 #49 NEW cov: 12097 ft: 14754 corp: 35/176b lim: 10 exec/s: 49 rss: 73Mb L: 2/10 MS: 1 CopyPart- 00:07:36.468 [2024-07-15 16:18:21.896874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003f0a cdw11:00000000 00:07:36.468 [2024-07-15 16:18:21.896901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.468 #50 NEW cov: 12097 ft: 14781 corp: 36/179b lim: 10 exec/s: 50 rss: 73Mb L: 3/10 MS: 1 ChangeByte- 00:07:36.468 [2024-07-15 16:18:21.937074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.468 [2024-07-15 16:18:21.937101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.468 [2024-07-15 16:18:21.937153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.468 [2024-07-15 16:18:21.937167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.468 #51 NEW cov: 12097 ft: 14789 corp: 37/183b lim: 10 exec/s: 51 rss: 74Mb L: 4/10 MS: 1 ShuffleBytes- 00:07:36.468 [2024-07-15 16:18:21.987242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000900 cdw11:00000000 00:07:36.468 [2024-07-15 16:18:21.987268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.468 [2024-07-15 16:18:21.987319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000e0a cdw11:00000000 00:07:36.468 [2024-07-15 16:18:21.987332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.468 #52 NEW cov: 12097 ft: 14799 corp: 38/187b lim: 10 exec/s: 52 rss: 74Mb L: 4/10 MS: 1 PersAutoDict- DE: "\011\000"- 00:07:36.468 [2024-07-15 16:18:22.027434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:07:36.468 [2024-07-15 16:18:22.027459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.468 [2024-07-15 16:18:22.027509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000900 cdw11:00000000 00:07:36.468 [2024-07-15 16:18:22.027523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.468 [2024-07-15 16:18:22.027579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007b7b cdw11:00000000 00:07:36.468 [2024-07-15 16:18:22.027593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.727 #53 NEW cov: 12097 ft: 14803 corp: 39/193b lim: 10 exec/s: 53 rss: 74Mb L: 6/10 MS: 1 PersAutoDict- DE: "\011\000"- 00:07:36.727 [2024-07-15 16:18:22.067808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 
cdw11:00000000 00:07:36.727 [2024-07-15 16:18:22.067834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.727 [2024-07-15 16:18:22.067884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.727 [2024-07-15 16:18:22.067899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.727 [2024-07-15 16:18:22.067948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.727 [2024-07-15 16:18:22.067961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.727 [2024-07-15 16:18:22.068010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.727 [2024-07-15 16:18:22.068024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.727 [2024-07-15 16:18:22.068074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:07:36.728 [2024-07-15 16:18:22.068087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.728 #54 NEW cov: 12097 ft: 14813 corp: 40/203b lim: 10 exec/s: 27 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:36.728 #54 DONE cov: 12097 ft: 14813 corp: 40/203b lim: 10 exec/s: 27 rss: 74Mb 00:07:36.728 ###### Recommended dictionary. ###### 00:07:36.728 "\011\000" # Uses: 2 00:07:36.728 ###### End of recommended dictionary. 
###### 00:07:36.728 Done 54 runs in 2 second(s) 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.728 16:18:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:36.728 [2024-07-15 16:18:22.263379] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:36.728 [2024-07-15 16:18:22.263452] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517732 ] 00:07:36.728 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.986 [2024-07-15 16:18:22.458982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.986 [2024-07-15 16:18:22.531661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.245 [2024-07-15 16:18:22.591270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.245 [2024-07-15 16:18:22.607479] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:37.245 INFO: Running with entropic power schedule (0xFF, 100). 00:07:37.245 INFO: Seed: 2774665887 00:07:37.245 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:37.245 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:37.245 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:37.245 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.245 [2024-07-15 16:18:22.655407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.245 [2024-07-15 16:18:22.655444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.245 #2 INITED cov: 11881 ft: 11882 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:37.245 [2024-07-15 16:18:22.705369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.245 [2024-07-15 16:18:22.705401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.245 #3 NEW cov: 12011 ft: 12527 corp: 2/2b lim: 5 exec/s: 0 rss: 71Mb L: 1/1 MS: 1 ChangeBit- 00:07:37.245 [2024-07-15 16:18:22.785852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.245 [2024-07-15 16:18:22.785883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.245 [2024-07-15 16:18:22.785917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.245 [2024-07-15 16:18:22.785932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.245 [2024-07-15 16:18:22.785961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.245 [2024-07-15 16:18:22.785976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.245 [2024-07-15 16:18:22.786006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:37.245 [2024-07-15 16:18:22.786021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.245 [2024-07-15 16:18:22.786054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.246 [2024-07-15 16:18:22.786068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.504 #4 NEW cov: 12017 ft: 13528 corp: 3/7b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:37.504 [2024-07-15 16:18:22.866056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.866087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:22.866120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.866136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:22.866165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.866181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:22.866210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.866225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:22.866253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.866268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.504 #5 NEW cov: 12102 ft: 13851 corp: 4/12b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:37.504 [2024-07-15 16:18:22.946253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.946284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:22.946318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.946333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:22.946363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.946378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:22.946407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.946422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:22.946451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:22.946466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.504 #6 NEW cov: 12102 ft: 13926 corp: 5/17b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:07:37.504 [2024-07-15 16:18:23.026333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:23.026364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.504 [2024-07-15 16:18:23.026397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.504 [2024-07-15 16:18:23.026412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.505 [2024-07-15 16:18:23.026441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.505 [2024-07-15 16:18:23.026457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.505 #7 NEW cov: 12102 ft: 14197 corp: 6/20b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:07:37.765 [2024-07-15 16:18:23.086381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.086414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.765 #8 NEW cov: 12102 ft: 14283 corp: 7/21b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:07:37.765 [2024-07-15 16:18:23.146491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.146524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.765 #9 NEW cov: 12102 ft: 14432 corp: 8/22b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:37.765 [2024-07-15 16:18:23.226966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 
[2024-07-15 16:18:23.226999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.765 [2024-07-15 16:18:23.227032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.227048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.765 [2024-07-15 16:18:23.227078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.227093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.765 [2024-07-15 16:18:23.227122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.227137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.765 [2024-07-15 16:18:23.227165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.227180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.765 #10 NEW cov: 12102 ft: 14457 corp: 9/27b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:37.765 [2024-07-15 16:18:23.307170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.307206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.765 [2024-07-15 16:18:23.307240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.307256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.765 [2024-07-15 16:18:23.307286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.307302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.765 [2024-07-15 16:18:23.307331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.307345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.765 [2024-07-15 16:18:23.307375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.765 [2024-07-15 16:18:23.307390] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.023 #11 NEW cov: 12102 ft: 14501 corp: 10/32b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:07:38.023 [2024-07-15 16:18:23.367072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.023 [2024-07-15 16:18:23.367103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.023 #12 NEW cov: 12102 ft: 14563 corp: 11/33b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:07:38.023 [2024-07-15 16:18:23.427217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.023 [2024-07-15 16:18:23.427251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.023 #13 NEW cov: 12102 ft: 14605 corp: 12/34b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:07:38.023 [2024-07-15 16:18:23.477607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.023 [2024-07-15 16:18:23.477637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.023 [2024-07-15 16:18:23.477671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.023 [2024-07-15 16:18:23.477687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.023 [2024-07-15 16:18:23.477717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.023 [2024-07-15 16:18:23.477731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.023 [2024-07-15 16:18:23.477760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.023 [2024-07-15 16:18:23.477775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.023 [2024-07-15 16:18:23.477804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.023 [2024-07-15 16:18:23.477823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.282 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:38.282 #14 NEW cov: 12125 ft: 14629 corp: 13/39b lim: 5 exec/s: 14 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:07:38.282 [2024-07-15 16:18:23.828695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.282 [2024-07-15 
16:18:23.828746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.282 [2024-07-15 16:18:23.828796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.282 [2024-07-15 16:18:23.828812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.282 [2024-07-15 16:18:23.828843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.282 [2024-07-15 16:18:23.828859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.282 [2024-07-15 16:18:23.828889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.282 [2024-07-15 16:18:23.828905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.282 [2024-07-15 16:18:23.828935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.282 [2024-07-15 16:18:23.828950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.542 #15 NEW cov: 12125 ft: 14707 corp: 14/44b lim: 5 exec/s: 15 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:38.542 [2024-07-15 16:18:23.888758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.888792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:23.888842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.888858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:23.888888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.888904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:23.888934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.888950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:23.888979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.888994] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.542 #16 NEW cov: 12125 ft: 14788 corp: 15/49b lim: 5 exec/s: 16 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:38.542 [2024-07-15 16:18:23.948833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.948864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:23.948913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.948930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:23.948961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.948976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:23.949006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.949022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:23.949052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:23.949067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.542 #17 NEW cov: 12125 ft: 14864 corp: 16/54b lim: 5 exec/s: 17 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:38.542 [2024-07-15 16:18:24.029076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.029107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:24.029156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.029172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:24.029201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.029217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:24.029247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:38.542 [2024-07-15 16:18:24.029262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:24.029292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.029308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.542 #18 NEW cov: 12125 ft: 14919 corp: 17/59b lim: 5 exec/s: 18 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:38.542 [2024-07-15 16:18:24.109271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.109301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:24.109355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.109371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:24.109401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.109416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:24.109447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.109462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.542 [2024-07-15 16:18:24.109492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.542 [2024-07-15 16:18:24.109507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.801 #19 NEW cov: 12125 ft: 14950 corp: 18/64b lim: 5 exec/s: 19 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:07:38.801 [2024-07-15 16:18:24.189483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.189512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 [2024-07-15 16:18:24.189573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.189590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.801 [2024-07-15 16:18:24.189621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.189637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.801 [2024-07-15 16:18:24.189666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.189682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.801 [2024-07-15 16:18:24.189711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.189727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.801 #20 NEW cov: 12125 ft: 15004 corp: 19/69b lim: 5 exec/s: 20 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:38.801 [2024-07-15 16:18:24.269445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.269477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 #21 NEW cov: 12125 ft: 15073 corp: 20/70b lim: 5 exec/s: 21 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:38.801 [2024-07-15 16:18:24.349911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.349946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 [2024-07-15 16:18:24.349996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.350012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.801 [2024-07-15 16:18:24.350042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-07-15 16:18:24.350057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.801 [2024-07-15 16:18:24.350087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.802 [2024-07-15 16:18:24.350103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.802 [2024-07-15 16:18:24.350133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.802 [2024-07-15 16:18:24.350151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.060 #22 NEW cov: 12125 ft: 15080 corp: 21/75b lim: 5 exec/s: 22 
rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:07:39.060 [2024-07-15 16:18:24.400004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.400036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 [2024-07-15 16:18:24.400085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.400102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.060 [2024-07-15 16:18:24.400132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.400147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.060 [2024-07-15 16:18:24.400177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.400193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.060 [2024-07-15 16:18:24.400222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.400239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.060 #23 NEW cov: 12125 ft: 15094 corp: 22/80b lim: 5 exec/s: 23 rss: 74Mb L: 5/5 MS: 1 CMP- DE: "\376\377"- 00:07:39.060 [2024-07-15 16:18:24.450153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.450184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 [2024-07-15 16:18:24.450233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.450253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.060 [2024-07-15 16:18:24.450283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.450299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.060 [2024-07-15 16:18:24.450329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.450344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:39.060 [2024-07-15 16:18:24.450373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.450389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.060 #24 NEW cov: 12125 ft: 15104 corp: 23/85b lim: 5 exec/s: 24 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:07:39.060 [2024-07-15 16:18:24.500110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.500139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 [2024-07-15 16:18:24.500188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-07-15 16:18:24.500205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.060 #25 NEW cov: 12125 ft: 15306 corp: 24/87b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:39.061 [2024-07-15 16:18:24.580503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.061 [2024-07-15 16:18:24.580541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.061 [2024-07-15 16:18:24.580591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.061 [2024-07-15 16:18:24.580607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.061 [2024-07-15 16:18:24.580637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.061 [2024-07-15 16:18:24.580653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.061 [2024-07-15 16:18:24.580682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.061 [2024-07-15 16:18:24.580698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.061 [2024-07-15 16:18:24.580727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.061 [2024-07-15 16:18:24.580743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.320 #26 NEW cov: 12125 ft: 15327 corp: 25/92b lim: 5 exec/s: 13 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:07:39.320 #26 DONE cov: 12125 ft: 15327 corp: 25/92b lim: 5 exec/s: 13 rss: 74Mb 00:07:39.320 ###### Recommended dictionary. 
###### 00:07:39.320 "\376\377" # Uses: 0 00:07:39.320 ###### End of recommended dictionary. ###### 00:07:39.320 Done 26 runs in 2 second(s) 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.320 16:18:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:39.320 [2024-07-15 16:18:24.828400] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:39.320 [2024-07-15 16:18:24.828497] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518031 ] 00:07:39.320 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.579 [2024-07-15 16:18:25.029287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.579 [2024-07-15 16:18:25.101310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.838 [2024-07-15 16:18:25.160915] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.838 [2024-07-15 16:18:25.177090] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:39.838 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.838 INFO: Seed: 1050698498 00:07:39.838 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:39.838 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:39.838 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.838 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.838 [2024-07-15 16:18:25.222460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.222489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.838 #2 INITED cov: 11870 ft: 11881 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:39.838 [2024-07-15 16:18:25.263043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.263073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.838 [2024-07-15 16:18:25.263129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.263143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.838 [2024-07-15 16:18:25.263195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.263208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.838 [2024-07-15 16:18:25.263261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.263274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.838 [2024-07-15 16:18:25.263325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.263338] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.838 #3 NEW cov: 12011 ft: 13267 corp: 2/6b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:39.838 [2024-07-15 16:18:25.312737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.312763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.838 [2024-07-15 16:18:25.312815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.312828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.838 #4 NEW cov: 12017 ft: 13580 corp: 3/8b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:07:39.838 [2024-07-15 16:18:25.352893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.352918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.838 [2024-07-15 16:18:25.352971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.352985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.838 #5 NEW cov: 12102 ft: 13833 corp: 4/10b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:07:39.838 [2024-07-15 16:18:25.392982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.393006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.838 [2024-07-15 16:18:25.393060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.838 [2024-07-15 16:18:25.393074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.097 #6 NEW cov: 12102 ft: 13917 corp: 5/12b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ChangeBit- 00:07:40.097 [2024-07-15 16:18:25.443165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.443191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.097 [2024-07-15 16:18:25.443245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.443258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:40.097 #7 NEW cov: 12102 ft: 14119 corp: 6/14b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:07:40.097 [2024-07-15 16:18:25.493287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.493311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.097 [2024-07-15 16:18:25.493365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.493378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.097 #8 NEW cov: 12102 ft: 14163 corp: 7/16b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:40.097 [2024-07-15 16:18:25.533483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.533507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.097 [2024-07-15 16:18:25.533566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.533580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.097 [2024-07-15 16:18:25.533630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.533643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.097 #9 NEW cov: 12102 ft: 14369 corp: 8/19b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:07:40.097 [2024-07-15 16:18:25.583518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.583546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.097 [2024-07-15 16:18:25.583600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.583614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.097 #10 NEW cov: 12102 ft: 14392 corp: 9/21b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CopyPart- 00:07:40.097 [2024-07-15 16:18:25.623765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.623789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.097 [2024-07-15 16:18:25.623844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.623860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.097 [2024-07-15 16:18:25.623915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.623929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.097 #11 NEW cov: 12102 ft: 14472 corp: 10/24b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CopyPart- 00:07:40.097 [2024-07-15 16:18:25.673550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.097 [2024-07-15 16:18:25.673576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.356 #12 NEW cov: 12102 ft: 14516 corp: 11/25b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:07:40.356 [2024-07-15 16:18:25.723903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.723927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.356 [2024-07-15 16:18:25.723983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.723997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.356 #13 NEW cov: 12102 ft: 14552 corp: 12/27b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeByte- 00:07:40.356 [2024-07-15 16:18:25.764025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.764051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.356 [2024-07-15 16:18:25.764123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.764138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.356 #14 NEW cov: 12102 ft: 14600 corp: 13/29b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:40.356 [2024-07-15 16:18:25.804563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.804588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.356 [2024-07-15 16:18:25.804661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:40.356 [2024-07-15 16:18:25.804675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.356 [2024-07-15 16:18:25.804728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.804742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.356 [2024-07-15 16:18:25.804796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.804810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.356 [2024-07-15 16:18:25.804868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.804881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.356 #15 NEW cov: 12102 ft: 14686 corp: 14/34b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:07:40.356 [2024-07-15 16:18:25.854390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.356 [2024-07-15 16:18:25.854415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.356 [2024-07-15 16:18:25.854470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-07-15 16:18:25.854484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.357 [2024-07-15 16:18:25.854540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-07-15 16:18:25.854553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.357 #16 NEW cov: 12102 ft: 14784 corp: 15/37b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:40.357 [2024-07-15 16:18:25.904212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-07-15 16:18:25.904236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.357 #17 NEW cov: 12102 ft: 14798 corp: 16/38b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:07:40.615 [2024-07-15 16:18:25.944492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:25.944519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.615 [2024-07-15 
16:18:25.944580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:25.944594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.615 #18 NEW cov: 12102 ft: 14824 corp: 17/40b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:40.615 [2024-07-15 16:18:25.995087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:25.995113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.615 [2024-07-15 16:18:25.995168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:25.995182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.615 [2024-07-15 16:18:25.995235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:25.995248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.615 [2024-07-15 16:18:25.995300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:25.995316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.615 [2024-07-15 16:18:25.995367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:25.995381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.615 #19 NEW cov: 12102 ft: 14878 corp: 18/45b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:40.615 [2024-07-15 16:18:26.034881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:26.034905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.615 [2024-07-15 16:18:26.034960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:26.034974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.615 [2024-07-15 16:18:26.035029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.615 [2024-07-15 16:18:26.035042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.615 #20 NEW cov: 12102 ft: 14923 corp: 19/48b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 ChangeByte- 00:07:40.616 [2024-07-15 16:18:26.074740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.616 [2024-07-15 16:18:26.074765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.874 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:40.874 #21 NEW cov: 12125 ft: 14941 corp: 20/49b lim: 5 exec/s: 21 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:40.874 [2024-07-15 16:18:26.415844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.874 [2024-07-15 16:18:26.415885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.874 [2024-07-15 16:18:26.415941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.874 [2024-07-15 16:18:26.415955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.134 #22 NEW cov: 12125 ft: 14960 corp: 21/51b lim: 5 exec/s: 22 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:41.134 [2024-07-15 16:18:26.466019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.466045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.466101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.466115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.466168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.466184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.134 #23 NEW cov: 12125 ft: 15017 corp: 22/54b lim: 5 exec/s: 23 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:07:41.134 [2024-07-15 16:18:26.516298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.516322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.516379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.516394] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.516447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.516460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.516512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.516525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.134 #24 NEW cov: 12125 ft: 15039 corp: 23/58b lim: 5 exec/s: 24 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:41.134 [2024-07-15 16:18:26.566264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.566289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.566344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.566357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.566412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.566425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.134 #25 NEW cov: 12125 ft: 15050 corp: 24/61b lim: 5 exec/s: 25 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:07:41.134 [2024-07-15 16:18:26.606667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.606692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.606747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.606761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.606812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.606825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.606877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 
[2024-07-15 16:18:26.606893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.606946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.606959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.134 #26 NEW cov: 12125 ft: 15088 corp: 25/66b lim: 5 exec/s: 26 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:07:41.134 [2024-07-15 16:18:26.656850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.656875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.656930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.656944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.656996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.657010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.657060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.657073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.134 [2024-07-15 16:18:26.657122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.134 [2024-07-15 16:18:26.657135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.134 #27 NEW cov: 12125 ft: 15109 corp: 26/71b lim: 5 exec/s: 27 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:41.134 [2024-07-15 16:18:26.706531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.135 [2024-07-15 16:18:26.706555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.135 [2024-07-15 16:18:26.706624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.135 [2024-07-15 16:18:26.706638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.394 #28 NEW cov: 12125 ft: 15115 corp: 27/73b lim: 5 exec/s: 28 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:41.394 [2024-07-15 16:18:26.746634] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.746659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.746711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.746725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.394 #29 NEW cov: 12125 ft: 15127 corp: 28/75b lim: 5 exec/s: 29 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:07:41.394 [2024-07-15 16:18:26.796813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.796837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.796908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.796921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.394 #30 NEW cov: 12125 ft: 15131 corp: 29/77b lim: 5 exec/s: 30 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:07:41.394 [2024-07-15 16:18:26.837188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.837214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.837268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.837282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.837335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.837348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.837400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.837413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.394 #31 NEW cov: 12125 ft: 15141 corp: 30/81b lim: 5 exec/s: 31 rss: 73Mb L: 4/5 MS: 1 CopyPart- 00:07:41.394 [2024-07-15 16:18:26.887183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.887208] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.887278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.887292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.887344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.887358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.394 #32 NEW cov: 12125 ft: 15170 corp: 31/84b lim: 5 exec/s: 32 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:07:41.394 [2024-07-15 16:18:26.937469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.937494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.937550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.937569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.937621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.937634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.394 [2024-07-15 16:18:26.937688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.394 [2024-07-15 16:18:26.937701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.394 #33 NEW cov: 12125 ft: 15179 corp: 32/88b lim: 5 exec/s: 33 rss: 73Mb L: 4/5 MS: 1 CMP- DE: "\376\377"- 00:07:41.654 [2024-07-15 16:18:26.977646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:26.977670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:26.977723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:26.977737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:26.977787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:26.977800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:26.977850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:26.977863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.654 #34 NEW cov: 12125 ft: 15198 corp: 33/92b lim: 5 exec/s: 34 rss: 74Mb L: 4/5 MS: 1 ChangeByte- 00:07:41.654 [2024-07-15 16:18:27.027455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.027479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.027539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.027553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.654 #35 NEW cov: 12125 ft: 15202 corp: 34/94b lim: 5 exec/s: 35 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:41.654 [2024-07-15 16:18:27.077571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.077595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.077651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.077664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.654 #36 NEW cov: 12125 ft: 15248 corp: 35/96b lim: 5 exec/s: 36 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:41.654 [2024-07-15 16:18:27.128186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.128210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.128265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.128279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.128332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.128345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.654 
[2024-07-15 16:18:27.128396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.128409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.128460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.128474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.178297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.178321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.178375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.178389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.178441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.178454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.178505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.178518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.178577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.178591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.654 #38 NEW cov: 12125 ft: 15256 corp: 36/101b lim: 5 exec/s: 38 rss: 74Mb L: 5/5 MS: 2 ChangeBinInt-CopyPart- 00:07:41.654 [2024-07-15 16:18:27.218115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.218139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.218192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.218210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.654 [2024-07-15 16:18:27.218263] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.654 [2024-07-15 16:18:27.218277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.929 #39 NEW cov: 12125 ft: 15267 corp: 37/104b lim: 5 exec/s: 19 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:07:41.929 #39 DONE cov: 12125 ft: 15267 corp: 37/104b lim: 5 exec/s: 19 rss: 74Mb 00:07:41.929 ###### Recommended dictionary. ###### 00:07:41.929 "\376\377" # Uses: 0 00:07:41.929 ###### End of recommended dictionary. ###### 00:07:41.929 Done 39 runs in 2 second(s) 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.929 16:18:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:41.929 [2024-07-15 16:18:27.425048] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:41.929 [2024-07-15 16:18:27.425139] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518340 ] 00:07:41.929 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.188 [2024-07-15 16:18:27.630232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.188 [2024-07-15 16:18:27.702450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.188 [2024-07-15 16:18:27.761910] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.447 [2024-07-15 16:18:27.778101] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:42.447 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.447 INFO: Seed: 3650695858 00:07:42.447 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:42.447 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:42.447 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:42.447 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.447 #2 INITED exec/s: 0 rss: 64Mb 00:07:42.447 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.447 This may also happen if the target rejected all inputs we tried so far 00:07:42.447 [2024-07-15 16:18:27.827108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.447 [2024-07-15 16:18:27.827137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.447 [2024-07-15 16:18:27.827196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.447 [2024-07-15 16:18:27.827210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.447 [2024-07-15 16:18:27.827265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.447 [2024-07-15 16:18:27.827278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.706 NEW_FUNC[1/697]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:42.706 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.706 #3 NEW cov: 11904 ft: 11904 corp: 2/27b lim: 40 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:42.706 [2024-07-15 16:18:28.168154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.706 [2024-07-15 16:18:28.168194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.706 [2024-07-15 16:18:28.168254] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.706 [2024-07-15 16:18:28.168268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.706 [2024-07-15 16:18:28.168330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f8f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.706 [2024-07-15 16:18:28.168343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.707 [2024-07-15 16:18:28.168399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.707 [2024-07-15 16:18:28.168412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.707 #9 NEW cov: 12034 ft: 12960 corp: 3/63b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:07:42.707 [2024-07-15 16:18:28.228149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:7a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.707 [2024-07-15 16:18:28.228177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.707 [2024-07-15 16:18:28.228238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.707 [2024-07-15 16:18:28.228255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.707 [2024-07-15 16:18:28.228311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.707 [2024-07-15 16:18:28.228323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.707 #10 NEW cov: 12040 ft: 13163 corp: 4/89b lim: 40 exec/s: 0 rss: 72Mb L: 26/36 MS: 1 ChangeBinInt- 00:07:42.707 [2024-07-15 16:18:28.268262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.707 [2024-07-15 16:18:28.268288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.707 [2024-07-15 16:18:28.268349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.707 [2024-07-15 16:18:28.268363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.707 [2024-07-15 16:18:28.268418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.707 [2024-07-15 16:18:28.268432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:42.966 #11 NEW cov: 12125 ft: 13435 corp: 5/115b lim: 40 exec/s: 0 rss: 72Mb L: 26/36 MS: 1 ChangeBit- 00:07:42.966 [2024-07-15 16:18:28.318492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.966 [2024-07-15 16:18:28.318518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.966 [2024-07-15 16:18:28.318579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.966 [2024-07-15 16:18:28.318594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.966 [2024-07-15 16:18:28.318651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f885f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.966 [2024-07-15 16:18:28.318664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.318720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8f8f885 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.318733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.967 #12 NEW cov: 12125 ft: 13483 corp: 6/154b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CopyPart- 00:07:42.967 [2024-07-15 16:18:28.368545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.368572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.368633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.368647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.368705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858587 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.368722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.967 #13 NEW cov: 12125 ft: 13522 corp: 7/180b lim: 40 exec/s: 0 rss: 72Mb L: 26/39 MS: 1 ChangeBit- 00:07:42.967 [2024-07-15 16:18:28.418776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.418801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.418858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:42.967 [2024-07-15 16:18:28.418872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.418930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f885f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.418944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.419002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8f8f885 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.419015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.967 #14 NEW cov: 12125 ft: 13612 corp: 8/219b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ChangeBit- 00:07:42.967 [2024-07-15 16:18:28.468692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.468717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.468777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.468791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.967 #15 NEW cov: 12125 ft: 13972 corp: 9/236b lim: 40 exec/s: 0 rss: 73Mb L: 17/39 MS: 1 EraseBytes- 00:07:42.967 [2024-07-15 16:18:28.509167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.509192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.509250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.509264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.509321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:85f8f885 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.509333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.509390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.509403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.967 [2024-07-15 16:18:28.509459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:85858585 
cdw11:8585850a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.967 [2024-07-15 16:18:28.509478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.967 #16 NEW cov: 12125 ft: 14046 corp: 10/276b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:07:43.226 [2024-07-15 16:18:28.549018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.549043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.549101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.549115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.549172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858e85 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.549186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.226 #17 NEW cov: 12125 ft: 14203 corp: 11/302b lim: 40 exec/s: 0 rss: 73Mb L: 26/40 MS: 1 ChangeBinInt- 00:07:43.226 [2024-07-15 16:18:28.589161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.589186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.589246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.589260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.589317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.589330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.226 #18 NEW cov: 12125 ft: 14220 corp: 12/328b lim: 40 exec/s: 0 rss: 73Mb L: 26/40 MS: 1 ShuffleBytes- 00:07:43.226 [2024-07-15 16:18:28.629224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.629250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.629308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.629323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:43.226 [2024-07-15 16:18:28.629382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.629395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.226 #19 NEW cov: 12125 ft: 14304 corp: 13/355b lim: 40 exec/s: 0 rss: 73Mb L: 27/40 MS: 1 CrossOver- 00:07:43.226 [2024-07-15 16:18:28.679410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b85 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.679437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.679500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7a3a7385 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.679514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.679570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:8585858e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.679584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.226 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:43.226 #20 NEW cov: 12148 ft: 14357 corp: 14/386b lim: 40 exec/s: 0 rss: 73Mb L: 31/40 MS: 1 CrossOver- 00:07:43.226 [2024-07-15 16:18:28.729367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8585f8f8 cdw11:f8f8f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.729393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.729452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f8f88585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.729466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.226 #21 NEW cov: 12148 ft: 14424 corp: 15/408b lim: 40 exec/s: 0 rss: 73Mb L: 22/40 MS: 1 EraseBytes- 00:07:43.226 [2024-07-15 16:18:28.769624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8585f8f8 cdw11:f8f8f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.769649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.769710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f80027a7 cdw11:0574bcdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.769724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.226 [2024-07-15 16:18:28.769779] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:b8f88585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.226 [2024-07-15 16:18:28.769793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.486 #22 NEW cov: 12148 ft: 14448 corp: 16/438b lim: 40 exec/s: 0 rss: 73Mb L: 30/40 MS: 1 CMP- DE: "\000'\247\005t\274\333\270"- 00:07:43.486 [2024-07-15 16:18:28.819786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738526 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.819812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.819872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.819885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.819944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858e85 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.819958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.486 #23 NEW cov: 12148 ft: 14472 corp: 17/464b lim: 40 exec/s: 23 rss: 73Mb L: 26/40 MS: 1 ChangeByte- 00:07:43.486 [2024-07-15 16:18:28.859980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.860007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.860064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.860078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.860138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f885f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.860151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.860208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8f8f885 cdw11:85857b72 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.860221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.486 #24 NEW cov: 12148 ft: 14504 corp: 18/503b lim: 40 exec/s: 24 rss: 73Mb L: 39/40 MS: 1 ChangeBinInt- 00:07:43.486 [2024-07-15 16:18:28.910024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:b8858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 
16:18:28.910049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.910107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.910121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.910176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.910190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.486 #25 NEW cov: 12148 ft: 14530 corp: 19/529b lim: 40 exec/s: 25 rss: 73Mb L: 26/40 MS: 1 ChangeByte- 00:07:43.486 [2024-07-15 16:18:28.950110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8585df7b cdw11:7a3a7385 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.950136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.950196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.950209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.950266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.950280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.486 #26 NEW cov: 12148 ft: 14558 corp: 20/556b lim: 40 exec/s: 26 rss: 73Mb L: 27/40 MS: 1 InsertByte- 00:07:43.486 [2024-07-15 16:18:28.990372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.990398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.990463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.990477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.990537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f885f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.990551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:28.990607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8858585 cdw11:f8f8f8f8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:28.990620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.486 #27 NEW cov: 12148 ft: 14573 corp: 21/595b lim: 40 exec/s: 27 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:07:43.486 [2024-07-15 16:18:29.040637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:29.040662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:29.040721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:29.040735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:29.040792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:9ef8f885 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:29.040805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:29.040862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:29.040876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.486 [2024-07-15 16:18:29.040932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:85858585 cdw11:85a5850a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.486 [2024-07-15 16:18:29.040946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.486 #28 NEW cov: 12148 ft: 14594 corp: 22/635b lim: 40 exec/s: 28 rss: 73Mb L: 40/40 MS: 1 InsertByte- 00:07:43.745 [2024-07-15 16:18:29.080479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8585f8f8 cdw11:f8f8f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.080504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.080568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f80027a7 cdw11:05743cdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.080582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.080641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:b8f88585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.080654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.746 #29 NEW cov: 12148 ft: 14638 corp: 23/665b lim: 40 exec/s: 29 rss: 73Mb L: 30/40 MS: 1 ChangeBit- 00:07:43.746 
[2024-07-15 16:18:29.130396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff26a705 cdw11:a63b4e9c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.130421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.746 #31 NEW cov: 12148 ft: 15009 corp: 24/675b lim: 40 exec/s: 31 rss: 73Mb L: 10/40 MS: 2 CopyPart-CMP- DE: "\377&\247\005\246;N\234"- 00:07:43.746 [2024-07-15 16:18:29.170900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.170925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.170981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.170995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.171053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f885f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.171066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.171123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8f8f885 cdw11:85857b72 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.171136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.746 #32 NEW cov: 12148 ft: 15025 corp: 25/714b lim: 40 exec/s: 32 rss: 73Mb L: 39/40 MS: 1 ChangeBit- 00:07:43.746 [2024-07-15 16:18:29.210901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.210928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.210987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85852485 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.211002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.211059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8585858e cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.211072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.746 #33 NEW cov: 12148 ft: 15032 corp: 26/741b lim: 40 exec/s: 33 rss: 73Mb L: 27/40 MS: 1 InsertByte- 00:07:43.746 [2024-07-15 16:18:29.251026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.251050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.251108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.251122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.251178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858587 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.251195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.746 #34 NEW cov: 12148 ft: 15064 corp: 27/767b lim: 40 exec/s: 34 rss: 73Mb L: 26/40 MS: 1 ChangeBinInt- 00:07:43.746 [2024-07-15 16:18:29.301275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738526 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.301300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.301375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.301389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.301449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.301463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.746 [2024-07-15 16:18:29.301521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:8585f8f8 cdw11:8585858e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.746 [2024-07-15 16:18:29.301540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.006 #35 NEW cov: 12148 ft: 15070 corp: 28/806b lim: 40 exec/s: 35 rss: 73Mb L: 39/40 MS: 1 CrossOver- 00:07:44.006 [2024-07-15 16:18:29.351295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8585df7b cdw11:7a3a7385 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.351320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.351378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:81858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.351391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.351443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.351456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.006 #36 NEW cov: 12148 ft: 15093 corp: 29/833b lim: 40 exec/s: 36 rss: 73Mb L: 27/40 MS: 1 ChangeBit- 00:07:44.006 [2024-07-15 16:18:29.401434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85738581 cdw11:85857b7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.401461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.401539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3a738585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.401553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.401614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858587 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.401628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.451597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85738581 cdw11:85857b7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.451626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.451687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3a738585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.451700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.451759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858587 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.451772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.006 #38 NEW cov: 12148 ft: 15102 corp: 30/863b lim: 40 exec/s: 38 rss: 73Mb L: 30/40 MS: 2 CrossOver-ShuffleBytes- 00:07:44.006 [2024-07-15 16:18:29.491888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.491913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.491974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.491987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.492046] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f801f8f8 cdw11:f8f885f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.492059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.492116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8f8f885 cdw11:85857b72 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.492129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.006 #39 NEW cov: 12148 ft: 15108 corp: 31/902b lim: 40 exec/s: 39 rss: 74Mb L: 39/40 MS: 1 ChangeByte- 00:07:44.006 [2024-07-15 16:18:29.541838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.541864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.541921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.541934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.541988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858e85 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.542001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.006 #40 NEW cov: 12148 ft: 15110 corp: 32/928b lim: 40 exec/s: 40 rss: 74Mb L: 26/40 MS: 1 ShuffleBytes- 00:07:44.006 [2024-07-15 16:18:29.582238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.582263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.582326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.582340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.582398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:85f8f885 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.582412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.006 [2024-07-15 16:18:29.582471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.582484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:44.006 [2024-07-15 16:18:29.582546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.006 [2024-07-15 16:18:29.582560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.264 #41 NEW cov: 12148 ft: 15121 corp: 33/968b lim: 40 exec/s: 41 rss: 74Mb L: 40/40 MS: 1 CrossOver- 00:07:44.265 [2024-07-15 16:18:29.632061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:859885f8 cdw11:f8f8f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.632085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.632143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f8f80027 cdw11:a70574bc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.632156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.632211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:dbb8f885 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.632224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.265 #42 NEW cov: 12148 ft: 15136 corp: 34/999b lim: 40 exec/s: 42 rss: 74Mb L: 31/40 MS: 1 InsertByte- 00:07:44.265 [2024-07-15 16:18:29.672298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:859885f8 cdw11:f8f8f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.672323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.672383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f8f80027 cdw11:a70574bc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.672396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.672453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:dbb8f885 cdw11:85854545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.672466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.672523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:45454545 cdw11:45858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.672541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.265 #43 NEW cov: 12148 ft: 15176 corp: 35/1037b lim: 40 exec/s: 43 rss: 74Mb L: 38/40 MS: 1 InsertRepeatedBytes- 00:07:44.265 [2024-07-15 16:18:29.722278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.722303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.722360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.722374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.722432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858587 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.722445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.265 #44 NEW cov: 12148 ft: 15191 corp: 36/1063b lim: 40 exec/s: 44 rss: 74Mb L: 26/40 MS: 1 ShuffleBytes- 00:07:44.265 [2024-07-15 16:18:29.762390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85857b7a cdw11:3a738585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.762414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.762472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858505 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.762486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.762548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858e85 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.762562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.265 #45 NEW cov: 12148 ft: 15226 corp: 37/1089b lim: 40 exec/s: 45 rss: 74Mb L: 26/40 MS: 1 ChangeBit- 00:07:44.265 [2024-07-15 16:18:29.802775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.802799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.802858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:8585f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.802871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.802930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:85f8f885 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.802944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.803003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 
nsid:0 cdw10:f8f8f8f8 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.803016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.265 [2024-07-15 16:18:29.803074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:85858585 cdw11:8585850a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.265 [2024-07-15 16:18:29.803087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.265 #46 NEW cov: 12148 ft: 15234 corp: 38/1129b lim: 40 exec/s: 23 rss: 74Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:44.265 #46 DONE cov: 12148 ft: 15234 corp: 38/1129b lim: 40 exec/s: 23 rss: 74Mb 00:07:44.265 ###### Recommended dictionary. ###### 00:07:44.265 "\000'\247\005t\274\333\270" # Uses: 0 00:07:44.265 "\377&\247\005\246;N\234" # Uses: 0 00:07:44.265 ###### End of recommended dictionary. ###### 00:07:44.265 Done 46 runs in 2 second(s) 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:44.524 16:18:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:44.524 [2024-07-15 16:18:30.007734] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:07:44.524 [2024-07-15 16:18:30.007810] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518644 ] 00:07:44.524 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.782 [2024-07-15 16:18:30.208406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.782 [2024-07-15 16:18:30.282831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.782 [2024-07-15 16:18:30.342940] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.782 [2024-07-15 16:18:30.359157] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:45.041 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.041 INFO: Seed: 1937754150 00:07:45.041 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:45.041 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:45.041 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:45.041 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.041 #2 INITED exec/s: 0 rss: 65Mb 00:07:45.041 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.041 This may also happen if the target rejected all inputs we tried so far 00:07:45.041 [2024-07-15 16:18:30.414683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.041 [2024-07-15 16:18:30.414715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.041 [2024-07-15 16:18:30.414775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.041 [2024-07-15 16:18:30.414789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.300 NEW_FUNC[1/697]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:45.300 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.300 #9 NEW cov: 11910 ft: 11911 corp: 2/18b lim: 40 exec/s: 0 rss: 71Mb L: 17/17 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:45.300 [2024-07-15 16:18:30.757823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.300 [2024-07-15 16:18:30.757871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.301 [2024-07-15 16:18:30.757987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.301 [2024-07-15 16:18:30.758008] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.301 [2024-07-15 16:18:30.758111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.301 [2024-07-15 16:18:30.758132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.301 [2024-07-15 16:18:30.758231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.301 [2024-07-15 16:18:30.758254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.301 NEW_FUNC[1/1]: 0x13612e0 in nvmf_tcp_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3403 00:07:45.301 #10 NEW cov: 12041 ft: 12766 corp: 3/54b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:07:45.301 [2024-07-15 16:18:30.827989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.301 [2024-07-15 16:18:30.828023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.301 [2024-07-15 16:18:30.828133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.301 [2024-07-15 16:18:30.828148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.301 [2024-07-15 16:18:30.828242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.301 [2024-07-15 16:18:30.828258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.301 [2024-07-15 16:18:30.828345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.301 [2024-07-15 16:18:30.828361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.301 #11 NEW cov: 12052 ft: 13128 corp: 4/90b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 ChangeBit- 00:07:45.560 [2024-07-15 16:18:30.888317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:30.888346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:30.888445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:30.888460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:30.888562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:30.888580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:30.888679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ec000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:30.888694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.560 #12 NEW cov: 12137 ft: 13424 corp: 5/129b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CopyPart- 00:07:45.560 [2024-07-15 16:18:30.957910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:30.957936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:30.958026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:30.958043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.560 #13 NEW cov: 12137 ft: 13515 corp: 6/146b lim: 40 exec/s: 0 rss: 72Mb L: 17/39 MS: 1 ShuffleBytes- 00:07:45.560 [2024-07-15 16:18:31.008829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:31.008855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:31.008954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:31.008971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:31.009069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:31.009085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:31.009192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ecec0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:31.009207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.560 #14 NEW cov: 12137 ft: 13673 corp: 7/183b lim: 40 exec/s: 0 rss: 72Mb L: 37/39 MS: 1 CrossOver- 00:07:45.560 [2024-07-15 16:18:31.058176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:31.058207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:31.058299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:31.058315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.560 #15 NEW cov: 12137 ft: 13721 corp: 8/200b lim: 40 exec/s: 0 rss: 72Mb L: 17/39 MS: 1 ChangeBinInt- 00:07:45.560 [2024-07-15 16:18:31.108547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:310a0400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:31.108573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.560 [2024-07-15 16:18:31.108678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.560 [2024-07-15 16:18:31.108694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.819 #16 NEW cov: 12137 ft: 13813 corp: 9/218b lim: 40 exec/s: 0 rss: 72Mb L: 18/39 MS: 1 InsertByte- 00:07:45.819 [2024-07-15 16:18:31.169504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.169533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.819 [2024-07-15 16:18:31.169641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.169656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.819 [2024-07-15 16:18:31.169754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.169769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.819 [2024-07-15 16:18:31.169865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:24000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.169882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.819 #17 NEW cov: 12137 ft: 13834 corp: 10/254b lim: 40 exec/s: 0 rss: 72Mb L: 36/39 MS: 1 ChangeBinInt- 00:07:45.819 [2024-07-15 16:18:31.218596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.218621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.819 #18 NEW cov: 12137 ft: 14588 corp: 11/265b lim: 40 exec/s: 0 rss: 72Mb L: 11/39 MS: 1 EraseBytes- 00:07:45.819 [2024-07-15 16:18:31.269848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.269874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.819 [2024-07-15 16:18:31.269967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.269984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.819 [2024-07-15 16:18:31.270075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.270093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.819 [2024-07-15 16:18:31.270204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ecec0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.270223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.819 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:45.819 #19 NEW cov: 12160 ft: 14659 corp: 12/302b lim: 40 exec/s: 0 rss: 72Mb L: 37/39 MS: 1 ShuffleBytes- 00:07:45.819 [2024-07-15 16:18:31.339579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:0f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.339606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.819 [2024-07-15 16:18:31.339708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.339725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.819 #20 NEW cov: 12160 ft: 14708 corp: 13/323b lim: 40 exec/s: 0 rss: 72Mb L: 21/39 MS: 1 CMP- DE: "\377\377\377\017"- 00:07:45.819 [2024-07-15 16:18:31.389803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.389830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.819 [2024-07-15 16:18:31.389929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.819 [2024-07-15 16:18:31.389945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.079 #21 NEW cov: 12160 ft: 14732 corp: 14/340b lim: 40 exec/s: 21 rss: 72Mb L: 17/39 MS: 1 ChangeByte- 00:07:46.079 [2024-07-15 16:18:31.439729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:26ff26a7 cdw11:06e06614 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 
[2024-07-15 16:18:31.439756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.079 #23 NEW cov: 12160 ft: 14771 corp: 15/349b lim: 40 exec/s: 23 rss: 72Mb L: 9/39 MS: 2 ChangeByte-CMP- DE: "\377&\247\006\340f\024\250"- 00:07:46.079 [2024-07-15 16:18:31.491056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.491084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.079 [2024-07-15 16:18:31.491181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.491197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.079 [2024-07-15 16:18:31.491289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.491306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.079 [2024-07-15 16:18:31.491389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.491409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.079 #24 NEW cov: 12160 ft: 14806 corp: 16/385b lim: 40 exec/s: 24 rss: 72Mb L: 36/39 MS: 1 ChangeBit- 00:07:46.079 [2024-07-15 16:18:31.540403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.540432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.079 [2024-07-15 16:18:31.540532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.540549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.079 #25 NEW cov: 12160 ft: 14833 corp: 17/402b lim: 40 exec/s: 25 rss: 72Mb L: 17/39 MS: 1 CrossOver- 00:07:46.079 [2024-07-15 16:18:31.590601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.590628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.079 [2024-07-15 16:18:31.590722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.590739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.079 #26 NEW cov: 12160 ft: 14846 corp: 18/419b lim: 40 
exec/s: 26 rss: 72Mb L: 17/39 MS: 1 CopyPart- 00:07:46.079 [2024-07-15 16:18:31.650887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.650913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.079 [2024-07-15 16:18:31.651013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.079 [2024-07-15 16:18:31.651028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.339 #27 NEW cov: 12160 ft: 14848 corp: 19/436b lim: 40 exec/s: 27 rss: 72Mb L: 17/39 MS: 1 ChangeBit- 00:07:46.339 [2024-07-15 16:18:31.701048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.339 [2024-07-15 16:18:31.701073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.339 [2024-07-15 16:18:31.701168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:25000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.339 [2024-07-15 16:18:31.701184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.339 #28 NEW cov: 12160 ft: 14849 corp: 20/453b lim: 40 exec/s: 28 rss: 72Mb L: 17/39 MS: 1 ChangeByte- 00:07:46.339 [2024-07-15 16:18:31.751942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.339 [2024-07-15 16:18:31.751967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.339 [2024-07-15 16:18:31.752053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.339 [2024-07-15 16:18:31.752071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.339 [2024-07-15 16:18:31.752164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.339 [2024-07-15 16:18:31.752181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.339 [2024-07-15 16:18:31.752278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:000000ec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.339 [2024-07-15 16:18:31.752294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.339 #29 NEW cov: 12160 ft: 14898 corp: 21/492b lim: 40 exec/s: 29 rss: 72Mb L: 39/39 MS: 1 CopyPart- 00:07:46.339 [2024-07-15 16:18:31.812137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:46.339 [2024-07-15 16:18:31.812163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.339 [2024-07-15 16:18:31.812254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ec9decec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.339 [2024-07-15 16:18:31.812271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.340 [2024-07-15 16:18:31.812356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.340 [2024-07-15 16:18:31.812372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.340 [2024-07-15 16:18:31.812479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:000000ec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.340 [2024-07-15 16:18:31.812495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.340 #30 NEW cov: 12160 ft: 14939 corp: 22/531b lim: 40 exec/s: 30 rss: 72Mb L: 39/39 MS: 1 ChangeByte- 00:07:46.340 [2024-07-15 16:18:31.871747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a041169 cdw11:d01b07a7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.340 [2024-07-15 16:18:31.871772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.340 [2024-07-15 16:18:31.871866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2700000c cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.340 [2024-07-15 16:18:31.871882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.340 #31 NEW cov: 12160 ft: 14962 corp: 23/548b lim: 40 exec/s: 31 rss: 72Mb L: 17/39 MS: 1 CMP- DE: "\021i\320\033\007\247'\000"- 00:07:46.599 [2024-07-15 16:18:31.932809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:31.932837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:31.932934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecb3ecec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:31.932952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:31.933048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:31.933066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:31.933158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:000000ec cdw11:ecec0000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:31.933175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.599 #32 NEW cov: 12160 ft: 14977 corp: 24/587b lim: 40 exec/s: 32 rss: 72Mb L: 39/39 MS: 1 ChangeByte- 00:07:46.599 [2024-07-15 16:18:31.982225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:31.982251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:31.982345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000c0000 cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:31.982361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.599 #33 NEW cov: 12160 ft: 14998 corp: 25/604b lim: 40 exec/s: 33 rss: 72Mb L: 17/39 MS: 1 ShuffleBytes- 00:07:46.599 [2024-07-15 16:18:32.033098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:32.033124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:32.033211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0ececec cdw11:ec9decec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:32.033228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:32.033320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:32.033337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:32.033426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:000000ec cdw11:ecec0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:32.033442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.599 #34 NEW cov: 12160 ft: 15009 corp: 26/643b lim: 40 exec/s: 34 rss: 72Mb L: 39/39 MS: 1 ChangeByte- 00:07:46.599 [2024-07-15 16:18:32.092629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:210a0400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:32.092655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:32.092769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:32.092786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.599 #35 NEW cov: 12160 ft: 15015 corp: 
27/661b lim: 40 exec/s: 35 rss: 72Mb L: 18/39 MS: 1 ChangeBit- 00:07:46.599 [2024-07-15 16:18:32.152832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a041169 cdw11:d01b07a7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:32.152858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.599 [2024-07-15 16:18:32.152952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:041169d0 cdw11:1b07a727 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.599 [2024-07-15 16:18:32.152973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.858 #36 NEW cov: 12160 ft: 15028 corp: 28/678b lim: 40 exec/s: 36 rss: 73Mb L: 17/39 MS: 1 CopyPart- 00:07:46.858 [2024-07-15 16:18:32.213453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.213480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.858 [2024-07-15 16:18:32.213589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:0000ff26 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.213606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.858 [2024-07-15 16:18:32.213705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:a706e066 cdw11:14a80000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.213723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.858 #37 NEW cov: 12160 ft: 15251 corp: 29/703b lim: 40 exec/s: 37 rss: 73Mb L: 25/39 MS: 1 PersAutoDict- DE: "\377&\247\006\340f\024\250"- 00:07:46.858 [2024-07-15 16:18:32.263374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:310a0400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.263401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.858 [2024-07-15 16:18:32.263498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00e00000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.263515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.858 #38 NEW cov: 12160 ft: 15273 corp: 30/721b lim: 40 exec/s: 38 rss: 73Mb L: 18/39 MS: 1 CrossOver- 00:07:46.858 [2024-07-15 16:18:32.313598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a041169 cdw11:d01b07a7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.313623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.858 [2024-07-15 16:18:32.313718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:02020202 
cdw11:02020411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.313734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.858 #39 NEW cov: 12160 ft: 15282 corp: 31/744b lim: 40 exec/s: 39 rss: 73Mb L: 23/39 MS: 1 InsertRepeatedBytes- 00:07:46.858 [2024-07-15 16:18:32.374577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.374603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.858 [2024-07-15 16:18:32.374701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.374718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.858 [2024-07-15 16:18:32.374814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ecececec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.858 [2024-07-15 16:18:32.374830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.859 [2024-07-15 16:18:32.374941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ecec0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.859 [2024-07-15 16:18:32.374959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.859 #40 NEW cov: 12160 ft: 15332 corp: 32/780b lim: 40 exec/s: 20 rss: 73Mb L: 36/39 MS: 1 CopyPart- 00:07:46.859 #40 DONE cov: 12160 ft: 15332 corp: 32/780b lim: 40 exec/s: 20 rss: 73Mb 00:07:46.859 ###### Recommended dictionary. ###### 00:07:46.859 "\377\377\377\017" # Uses: 0 00:07:46.859 "\377&\247\006\340f\024\250" # Uses: 1 00:07:46.859 "\021i\320\033\007\247'\000" # Uses: 0 00:07:46.859 ###### End of recommended dictionary. 
###### 00:07:46.859 Done 40 runs in 2 second(s) 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:47.116 16:18:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:47.117 [2024-07-15 16:18:32.582759] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:47.117 [2024-07-15 16:18:32.582851] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519006 ] 00:07:47.117 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.375 [2024-07-15 16:18:32.787074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.375 [2024-07-15 16:18:32.860742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.375 [2024-07-15 16:18:32.920436] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.375 [2024-07-15 16:18:32.936642] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:47.375 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.375 INFO: Seed: 218781055 00:07:47.632 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:47.632 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:47.632 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:47.632 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.632 #2 INITED exec/s: 0 rss: 64Mb 00:07:47.632 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.632 This may also happen if the target rejected all inputs we tried so far 00:07:47.632 [2024-07-15 16:18:32.995358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.632 [2024-07-15 16:18:32.995389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.632 [2024-07-15 16:18:32.995461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.632 [2024-07-15 16:18:32.995476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.632 [2024-07-15 16:18:32.995535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.632 [2024-07-15 16:18:32.995549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.889 NEW_FUNC[1/698]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:47.889 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:47.889 #6 NEW cov: 11914 ft: 11914 corp: 2/29b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 4 ShuffleBytes-ShuffleBytes-CMP-InsertRepeatedBytes- DE: "\377\377"- 00:07:47.889 [2024-07-15 16:18:33.336157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.889 [2024-07-15 16:18:33.336219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.889 #7 NEW cov: 12044 ft: 13342 
corp: 3/38b lim: 40 exec/s: 0 rss: 72Mb L: 9/28 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:47.889 [2024-07-15 16:18:33.386323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.889 [2024-07-15 16:18:33.386349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.889 [2024-07-15 16:18:33.386423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1e cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.889 [2024-07-15 16:18:33.386437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.889 [2024-07-15 16:18:33.386489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.889 [2024-07-15 16:18:33.386503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.889 #8 NEW cov: 12050 ft: 13548 corp: 4/66b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeBit- 00:07:47.889 [2024-07-15 16:18:33.436599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.889 [2024-07-15 16:18:33.436623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.889 [2024-07-15 16:18:33.436697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.889 [2024-07-15 16:18:33.436714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.889 [2024-07-15 16:18:33.436768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c434343 cdw11:43434343 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.889 [2024-07-15 16:18:33.436781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.889 [2024-07-15 16:18:33.436835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:431c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.889 [2024-07-15 16:18:33.436848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.889 #9 NEW cov: 12135 ft: 14062 corp: 5/102b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:07:48.146 [2024-07-15 16:18:33.476263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.476289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.146 #10 NEW cov: 12135 ft: 14278 corp: 6/111b lim: 40 exec/s: 0 rss: 72Mb L: 9/36 MS: 1 ChangeBinInt- 00:07:48.146 [2024-07-15 16:18:33.526584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:0a00004c cdw11:219d2f08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.526609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.146 [2024-07-15 16:18:33.526664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a7270000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.526677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.146 #11 NEW cov: 12135 ft: 14538 corp: 7/128b lim: 40 exec/s: 0 rss: 72Mb L: 17/36 MS: 1 CMP- DE: "L!\235/\010\247'\000"- 00:07:48.146 [2024-07-15 16:18:33.576854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.576879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.146 [2024-07-15 16:18:33.576948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.576962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.146 [2024-07-15 16:18:33.577014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.577028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.146 #12 NEW cov: 12135 ft: 14691 corp: 8/156b lim: 40 exec/s: 0 rss: 72Mb L: 28/36 MS: 1 ChangeBinInt- 00:07:48.146 [2024-07-15 16:18:33.616920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a01000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.616944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.146 [2024-07-15 16:18:33.617014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000004c cdw11:219d2f08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.617029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.146 [2024-07-15 16:18:33.617083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a7270000 cdw11:00090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.617100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.146 #13 NEW cov: 12135 ft: 14788 corp: 9/182b lim: 40 exec/s: 0 rss: 72Mb L: 26/36 MS: 1 CrossOver- 00:07:48.146 [2024-07-15 16:18:33.656744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.656768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:48.146 #19 NEW cov: 12135 ft: 14872 corp: 10/191b lim: 40 exec/s: 0 rss: 72Mb L: 9/36 MS: 1 ChangeBit- 00:07:48.146 [2024-07-15 16:18:33.696853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.146 [2024-07-15 16:18:33.696877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.404 #20 NEW cov: 12135 ft: 14968 corp: 11/200b lim: 40 exec/s: 0 rss: 72Mb L: 9/36 MS: 1 ShuffleBytes- 00:07:48.404 [2024-07-15 16:18:33.747343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.747367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.404 [2024-07-15 16:18:33.747436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.747451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.404 [2024-07-15 16:18:33.747502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.747516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.404 #21 NEW cov: 12135 ft: 14994 corp: 12/228b lim: 40 exec/s: 0 rss: 73Mb L: 28/36 MS: 1 ShuffleBytes- 00:07:48.404 [2024-07-15 16:18:33.797471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:321c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.797496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.404 [2024-07-15 16:18:33.797553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1e cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.797568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.404 [2024-07-15 16:18:33.797622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.797636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.404 #22 NEW cov: 12135 ft: 15009 corp: 13/256b lim: 40 exec/s: 0 rss: 73Mb L: 28/36 MS: 1 ChangeByte- 00:07:48.404 [2024-07-15 16:18:33.847643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.847668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.404 [2024-07-15 16:18:33.847722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.847738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.404 [2024-07-15 16:18:33.847791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.847806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.404 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:48.404 #23 NEW cov: 12158 ft: 15074 corp: 14/284b lim: 40 exec/s: 0 rss: 73Mb L: 28/36 MS: 1 ShuffleBytes- 00:07:48.404 [2024-07-15 16:18:33.897443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.897468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.404 #28 NEW cov: 12158 ft: 15106 corp: 15/296b lim: 40 exec/s: 0 rss: 73Mb L: 12/36 MS: 5 EraseBytes-InsertByte-EraseBytes-ChangeBit-PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:48.404 [2024-07-15 16:18:33.937693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a00004c cdw11:219d2f08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.937718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.404 [2024-07-15 16:18:33.937774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a7270001 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.404 [2024-07-15 16:18:33.937787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.404 #29 NEW cov: 12158 ft: 15141 corp: 16/317b lim: 40 exec/s: 0 rss: 73Mb L: 21/36 MS: 1 CMP- DE: "\001\000\000\004"- 00:07:48.663 [2024-07-15 16:18:33.988189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:090000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:33.988215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:33.988280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:33.988295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:33.988347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:33.988361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:33.988411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 
cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:33.988424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.663 #30 NEW cov: 12158 ft: 15170 corp: 17/349b lim: 40 exec/s: 30 rss: 73Mb L: 32/36 MS: 1 InsertRepeatedBytes- 00:07:48.663 [2024-07-15 16:18:34.028090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.028116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.028169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.028189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.028241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.028255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.663 #31 NEW cov: 12158 ft: 15195 corp: 18/377b lim: 40 exec/s: 31 rss: 73Mb L: 28/36 MS: 1 ShuffleBytes- 00:07:48.663 [2024-07-15 16:18:34.068385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.068411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.068464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1e1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.068477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.068533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c434343 cdw11:43434343 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.068547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.068601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:431c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.068614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.663 #32 NEW cov: 12158 ft: 15208 corp: 19/413b lim: 40 exec/s: 32 rss: 73Mb L: 36/36 MS: 1 ChangeBinInt- 00:07:48.663 [2024-07-15 16:18:34.118347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:321c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.118373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:48.663 [2024-07-15 16:18:34.118426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1e cdw11:1c1c241c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.118440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.118493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.118507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.663 #33 NEW cov: 12158 ft: 15277 corp: 20/441b lim: 40 exec/s: 33 rss: 73Mb L: 28/36 MS: 1 ChangeBinInt- 00:07:48.663 [2024-07-15 16:18:34.168379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:321c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.168406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.168458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1e cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.168472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.663 #34 NEW cov: 12158 ft: 15288 corp: 21/463b lim: 40 exec/s: 34 rss: 73Mb L: 22/36 MS: 1 EraseBytes- 00:07:48.663 [2024-07-15 16:18:34.218828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:321c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.218858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.218910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1e cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.218924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.218976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.218990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.663 [2024-07-15 16:18:34.219044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.663 [2024-07-15 16:18:34.219058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.921 #35 NEW cov: 12158 ft: 15302 corp: 22/497b lim: 40 exec/s: 35 rss: 73Mb L: 34/36 MS: 1 CopyPart- 00:07:48.921 [2024-07-15 16:18:34.258777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 
16:18:34.258803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.258856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.258870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.258921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.258935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.921 #36 NEW cov: 12158 ft: 15331 corp: 23/522b lim: 40 exec/s: 36 rss: 73Mb L: 25/36 MS: 1 EraseBytes- 00:07:48.921 [2024-07-15 16:18:34.298868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.298894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.298947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1d1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.298963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.299015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.299029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.921 #37 NEW cov: 12158 ft: 15391 corp: 24/550b lim: 40 exec/s: 37 rss: 73Mb L: 28/36 MS: 1 ChangeBinInt- 00:07:48.921 [2024-07-15 16:18:34.338689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.338716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.921 #38 NEW cov: 12158 ft: 15412 corp: 25/559b lim: 40 exec/s: 38 rss: 73Mb L: 9/36 MS: 1 CrossOver- 00:07:48.921 [2024-07-15 16:18:34.378940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a01000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.378967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.379021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000004c cdw11:219d2f08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.379036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.921 #39 NEW cov: 12158 ft: 15428 corp: 26/582b lim: 40 exec/s: 39 rss: 73Mb L: 23/36 MS: 1 
EraseBytes- 00:07:48.921 [2024-07-15 16:18:34.429557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c282828 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.429583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.429639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:281c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.429653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.429705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c434343 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.429720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.429773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:43434343 cdw11:431c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.429787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.429842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.429855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:48.921 #40 NEW cov: 12158 ft: 15494 corp: 27/622b lim: 40 exec/s: 40 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:48.921 [2024-07-15 16:18:34.469368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.469396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.469450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:e4e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.921 [2024-07-15 16:18:34.469464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.921 [2024-07-15 16:18:34.469515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.922 [2024-07-15 16:18:34.469535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.180 #41 NEW cov: 12158 ft: 15513 corp: 28/647b lim: 40 exec/s: 41 rss: 73Mb L: 25/40 MS: 1 ChangeBinInt- 00:07:49.180 [2024-07-15 16:18:34.519402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a00004c cdw11:219d2f08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.519429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.180 [2024-07-15 16:18:34.519485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a7270000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.519500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.180 #42 NEW cov: 12158 ft: 15534 corp: 29/664b lim: 40 exec/s: 42 rss: 73Mb L: 17/40 MS: 1 ShuffleBytes- 00:07:49.180 [2024-07-15 16:18:34.559783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c2f1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.559807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.180 [2024-07-15 16:18:34.559862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.559877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.180 [2024-07-15 16:18:34.559929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c4343 cdw11:43434343 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.559943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.180 [2024-07-15 16:18:34.559996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:43431c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.560009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.180 #43 NEW cov: 12158 ft: 15600 corp: 30/701b lim: 40 exec/s: 43 rss: 73Mb L: 37/40 MS: 1 InsertByte- 00:07:49.180 [2024-07-15 16:18:34.609768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.609792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.180 [2024-07-15 16:18:34.609845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1cb2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.609860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.180 [2024-07-15 16:18:34.609913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.609927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.180 #44 NEW cov: 12158 ft: 15609 corp: 31/729b lim: 40 exec/s: 44 rss: 73Mb L: 28/40 MS: 1 ChangeByte- 00:07:49.180 [2024-07-15 16:18:34.649723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a00004c cdw11:219d2f08 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:49.180 [2024-07-15 16:18:34.649748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.180 [2024-07-15 16:18:34.649802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a7270000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.649816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.180 #45 NEW cov: 12158 ft: 15620 corp: 32/746b lim: 40 exec/s: 45 rss: 74Mb L: 17/40 MS: 1 ShuffleBytes- 00:07:49.180 [2024-07-15 16:18:34.700187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:090000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.180 [2024-07-15 16:18:34.700215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.181 [2024-07-15 16:18:34.700270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.181 [2024-07-15 16:18:34.700285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.181 [2024-07-15 16:18:34.700339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff60ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.181 [2024-07-15 16:18:34.700354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.181 [2024-07-15 16:18:34.700406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.181 [2024-07-15 16:18:34.700420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.181 #46 NEW cov: 12158 ft: 15630 corp: 33/779b lim: 40 exec/s: 46 rss: 74Mb L: 33/40 MS: 1 InsertByte- 00:07:49.181 [2024-07-15 16:18:34.750006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.181 [2024-07-15 16:18:34.750030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.181 [2024-07-15 16:18:34.750084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.181 [2024-07-15 16:18:34.750099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.439 #47 NEW cov: 12158 ft: 15649 corp: 34/796b lim: 40 exec/s: 47 rss: 74Mb L: 17/40 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:49.439 [2024-07-15 16:18:34.800453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.800485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.439 
[2024-07-15 16:18:34.800546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c3a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.800560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.439 [2024-07-15 16:18:34.800612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c434343 cdw11:43434343 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.800627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.439 [2024-07-15 16:18:34.800681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:431c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.800694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.439 #48 NEW cov: 12158 ft: 15656 corp: 35/832b lim: 40 exec/s: 48 rss: 74Mb L: 36/40 MS: 1 ChangeByte- 00:07:49.439 [2024-07-15 16:18:34.840105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00010100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.840130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.439 #49 NEW cov: 12158 ft: 15671 corp: 36/847b lim: 40 exec/s: 49 rss: 74Mb L: 15/40 MS: 1 CopyPart- 00:07:49.439 [2024-07-15 16:18:34.890374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a00004c cdw11:219d2308 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.890412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.439 [2024-07-15 16:18:34.890466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a7270000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.890480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.439 #50 NEW cov: 12158 ft: 15708 corp: 37/864b lim: 40 exec/s: 50 rss: 74Mb L: 17/40 MS: 1 ChangeByte- 00:07:49.439 [2024-07-15 16:18:34.940403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a00a727 cdw11:00000900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.940428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.439 #51 NEW cov: 12158 ft: 15714 corp: 38/875b lim: 40 exec/s: 51 rss: 74Mb L: 11/40 MS: 1 EraseBytes- 00:07:49.439 [2024-07-15 16:18:34.980827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affff1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.980852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.439 [2024-07-15 16:18:34.980905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 
cdw10:1c1c1c1c cdw11:1c1cf51c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.980918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.439 [2024-07-15 16:18:34.980974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.439 [2024-07-15 16:18:34.980988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.439 #52 NEW cov: 12158 ft: 15732 corp: 39/903b lim: 40 exec/s: 26 rss: 74Mb L: 28/40 MS: 1 ChangeByte- 00:07:49.439 #52 DONE cov: 12158 ft: 15732 corp: 39/903b lim: 40 exec/s: 26 rss: 74Mb 00:07:49.439 ###### Recommended dictionary. ###### 00:07:49.439 "\377\377" # Uses: 0 00:07:49.439 "\001\000\000\000\000\000\000\000" # Uses: 2 00:07:49.439 "L!\235/\010\247'\000" # Uses: 0 00:07:49.439 "\001\000\000\004" # Uses: 0 00:07:49.439 ###### End of recommended dictionary. ###### 00:07:49.439 Done 52 runs in 2 second(s) 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.697 16:18:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 
traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:49.697 [2024-07-15 16:18:35.199177] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:07:49.697 [2024-07-15 16:18:35.199247] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519358 ] 00:07:49.697 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.955 [2024-07-15 16:18:35.405971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.955 [2024-07-15 16:18:35.479773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.212 [2024-07-15 16:18:35.539491] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.212 [2024-07-15 16:18:35.555698] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:50.212 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.212 INFO: Seed: 2838791481 00:07:50.212 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:50.212 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:50.212 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:50.212 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.212 #2 INITED exec/s: 0 rss: 65Mb 00:07:50.212 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.212 This may also happen if the target rejected all inputs we tried so far 00:07:50.212 [2024-07-15 16:18:35.605154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.212 [2024-07-15 16:18:35.605184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.212 [2024-07-15 16:18:35.605242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.212 [2024-07-15 16:18:35.605255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.213 [2024-07-15 16:18:35.605309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.213 [2024-07-15 16:18:35.605323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.471 NEW_FUNC[1/696]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:50.471 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:50.471 #4 NEW cov: 11885 ft: 11902 corp: 2/29b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:50.471 [2024-07-15 16:18:35.925895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.471 [2024-07-15 16:18:35.925938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 [2024-07-15 16:18:35.925998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.471 [2024-07-15 16:18:35.926013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.471 NEW_FUNC[1/1]: 0xf6d060 in spdk_ring_dequeue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:416 00:07:50.471 #5 NEW cov: 12032 ft: 12731 corp: 3/50b lim: 40 exec/s: 0 rss: 72Mb L: 21/28 MS: 1 EraseBytes- 00:07:50.471 [2024-07-15 16:18:35.986150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.471 [2024-07-15 16:18:35.986178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 [2024-07-15 16:18:35.986234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.471 [2024-07-15 16:18:35.986249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.472 [2024-07-15 16:18:35.986307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.472 [2024-07-15 16:18:35.986321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.472 #6 NEW cov: 12038 ft: 12915 corp: 4/78b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:50.472 [2024-07-15 16:18:36.036110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.472 [2024-07-15 16:18:36.036136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.472 [2024-07-15 16:18:36.036197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.472 [2024-07-15 16:18:36.036211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.731 #7 NEW cov: 12123 ft: 13284 corp: 5/99b lim: 40 exec/s: 0 rss: 72Mb L: 21/28 MS: 1 ChangeBinInt- 00:07:50.731 [2024-07-15 16:18:36.076336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.731 [2024-07-15 16:18:36.076362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.731 [2024-07-15 16:18:36.076423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.731 
[2024-07-15 16:18:36.076438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.731 [2024-07-15 16:18:36.076495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.731 [2024-07-15 16:18:36.076510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.731 #8 NEW cov: 12123 ft: 13483 corp: 6/127b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 CrossOver- 00:07:50.731 [2024-07-15 16:18:36.116353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.731 [2024-07-15 16:18:36.116379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.731 [2024-07-15 16:18:36.116441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.731 [2024-07-15 16:18:36.116456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.731 #9 NEW cov: 12123 ft: 13616 corp: 7/147b lim: 40 exec/s: 0 rss: 72Mb L: 20/28 MS: 1 EraseBytes- 00:07:50.731 [2024-07-15 16:18:36.166503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.731 [2024-07-15 16:18:36.166533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.731 [2024-07-15 16:18:36.166593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00150000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.731 [2024-07-15 16:18:36.166607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.731 #10 NEW cov: 12123 ft: 13674 corp: 8/168b lim: 40 exec/s: 0 rss: 72Mb L: 21/28 MS: 1 ChangeBinInt- 00:07:50.731 [2024-07-15 16:18:36.216539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.731 [2024-07-15 16:18:36.216565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.731 #12 NEW cov: 12123 ft: 14089 corp: 9/177b lim: 40 exec/s: 0 rss: 72Mb L: 9/28 MS: 2 InsertByte-CrossOver- 00:07:50.732 [2024-07-15 16:18:36.256742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffff0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.732 [2024-07-15 16:18:36.256768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.732 [2024-07-15 16:18:36.256828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.732 [2024-07-15 16:18:36.256842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.732 #13 NEW cov: 12123 ft: 14155 corp: 10/195b lim: 40 exec/s: 0 rss: 72Mb L: 18/28 MS: 1 EraseBytes- 00:07:50.732 [2024-07-15 16:18:36.307115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.732 [2024-07-15 16:18:36.307141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.732 [2024-07-15 16:18:36.307202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00001600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.732 [2024-07-15 16:18:36.307217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.732 [2024-07-15 16:18:36.307278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.732 [2024-07-15 16:18:36.307293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.991 #14 NEW cov: 12123 ft: 14193 corp: 11/224b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 InsertByte- 00:07:50.991 [2024-07-15 16:18:36.346982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dff5bff cdw11:ffff0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.347008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.991 [2024-07-15 16:18:36.347069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.347084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.991 #15 NEW cov: 12123 ft: 14213 corp: 12/242b lim: 40 exec/s: 0 rss: 72Mb L: 18/29 MS: 1 ChangeByte- 00:07:50.991 [2024-07-15 16:18:36.397004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.397030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.991 #16 NEW cov: 12123 ft: 14243 corp: 13/251b lim: 40 exec/s: 0 rss: 73Mb L: 9/29 MS: 1 ShuffleBytes- 00:07:50.991 [2024-07-15 16:18:36.447173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.447198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.991 #22 NEW cov: 12123 ft: 14302 corp: 14/260b lim: 40 exec/s: 0 rss: 73Mb L: 9/29 MS: 1 ShuffleBytes- 00:07:50.991 [2024-07-15 16:18:36.487593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.487619] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.991 [2024-07-15 16:18:36.487678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.487693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.991 [2024-07-15 16:18:36.487752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b4000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.487768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.991 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:50.991 #23 NEW cov: 12146 ft: 14340 corp: 15/289b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertByte- 00:07:50.991 [2024-07-15 16:18:36.527703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.527730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.991 [2024-07-15 16:18:36.527791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.527807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.991 [2024-07-15 16:18:36.527868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.991 [2024-07-15 16:18:36.527885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.991 #24 NEW cov: 12146 ft: 14363 corp: 16/317b lim: 40 exec/s: 0 rss: 73Mb L: 28/29 MS: 1 ShuffleBytes- 00:07:51.250 [2024-07-15 16:18:36.577857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.577884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.250 [2024-07-15 16:18:36.577946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.577960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.250 [2024-07-15 16:18:36.578019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.578035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.250 #25 NEW cov: 12146 ft: 14398 corp: 17/348b lim: 40 exec/s: 25 
rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:51.250 [2024-07-15 16:18:36.617914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.617940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.250 [2024-07-15 16:18:36.618003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000023 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.618017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.250 [2024-07-15 16:18:36.618077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.618092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.250 #26 NEW cov: 12146 ft: 14447 corp: 18/376b lim: 40 exec/s: 26 rss: 73Mb L: 28/31 MS: 1 ChangeByte- 00:07:51.250 [2024-07-15 16:18:36.668051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.668078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.250 [2024-07-15 16:18:36.668138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffff1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.668153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.250 [2024-07-15 16:18:36.668210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.668225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.250 #27 NEW cov: 12146 ft: 14458 corp: 19/407b lim: 40 exec/s: 27 rss: 73Mb L: 31/31 MS: 1 ChangeBinInt- 00:07:51.250 [2024-07-15 16:18:36.718108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.718135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.250 [2024-07-15 16:18:36.718196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.718211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.250 #28 NEW cov: 12146 ft: 14477 corp: 20/428b lim: 40 exec/s: 28 rss: 73Mb L: 21/31 MS: 1 ChangeBinInt- 00:07:51.250 [2024-07-15 16:18:36.758166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:3dff5bff cdw11:ffff0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.758197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.250 [2024-07-15 16:18:36.758260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.250 [2024-07-15 16:18:36.758274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.250 #29 NEW cov: 12146 ft: 14485 corp: 21/446b lim: 40 exec/s: 29 rss: 73Mb L: 18/31 MS: 1 ChangeBit- 00:07:51.250 [2024-07-15 16:18:36.808456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.251 [2024-07-15 16:18:36.808484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.251 [2024-07-15 16:18:36.808555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000028 cdw11:28282828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.251 [2024-07-15 16:18:36.808570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.251 [2024-07-15 16:18:36.808628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:28282828 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.251 [2024-07-15 16:18:36.808643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.510 #30 NEW cov: 12146 ft: 14502 corp: 22/476b lim: 40 exec/s: 30 rss: 73Mb L: 30/31 MS: 1 InsertRepeatedBytes- 00:07:51.510 [2024-07-15 16:18:36.848579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00004000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.848605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.510 [2024-07-15 16:18:36.848666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.848681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.510 [2024-07-15 16:18:36.848741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.848756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.510 #31 NEW cov: 12146 ft: 14536 corp: 23/507b lim: 40 exec/s: 31 rss: 73Mb L: 31/31 MS: 1 ChangeBit- 00:07:51.510 [2024-07-15 16:18:36.888668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.888695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.510 [2024-07-15 16:18:36.888754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.888768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.510 [2024-07-15 16:18:36.888825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.888841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.510 #32 NEW cov: 12146 ft: 14566 corp: 24/535b lim: 40 exec/s: 32 rss: 73Mb L: 28/31 MS: 1 CrossOver- 00:07:51.510 [2024-07-15 16:18:36.928809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.928834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.510 [2024-07-15 16:18:36.928892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.928905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.510 [2024-07-15 16:18:36.928960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.928976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.510 #33 NEW cov: 12146 ft: 14574 corp: 25/563b lim: 40 exec/s: 33 rss: 73Mb L: 28/31 MS: 1 ShuffleBytes- 00:07:51.510 [2024-07-15 16:18:36.968806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.968832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.510 [2024-07-15 16:18:36.968893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.510 [2024-07-15 16:18:36.968907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.510 #34 NEW cov: 12146 ft: 14584 corp: 26/584b lim: 40 exec/s: 34 rss: 73Mb L: 21/31 MS: 1 ChangeBinInt- 00:07:51.510 [2024-07-15 16:18:37.019086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.511 [2024-07-15 16:18:37.019113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.511 [2024-07-15 16:18:37.019176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.511 [2024-07-15 16:18:37.019190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.511 [2024-07-15 16:18:37.019250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:ffff0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.511 [2024-07-15 16:18:37.019264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.511 #35 NEW cov: 12146 ft: 14588 corp: 27/612b lim: 40 exec/s: 35 rss: 73Mb L: 28/31 MS: 1 CopyPart- 00:07:51.511 [2024-07-15 16:18:37.068985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.511 [2024-07-15 16:18:37.069010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.770 #36 NEW cov: 12146 ft: 14593 corp: 28/621b lim: 40 exec/s: 36 rss: 73Mb L: 9/31 MS: 1 ShuffleBytes- 00:07:51.770 [2024-07-15 16:18:37.109474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.109499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.109567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.109585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.109645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.109659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.109717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.109731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.770 #37 NEW cov: 12146 ft: 15051 corp: 29/655b lim: 40 exec/s: 37 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:07:51.770 [2024-07-15 16:18:37.149408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.149433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.149494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.149509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.149575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.149590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.770 #38 NEW cov: 12146 ft: 15057 corp: 30/683b lim: 40 exec/s: 38 rss: 73Mb L: 28/34 MS: 1 ShuffleBytes- 00:07:51.770 [2024-07-15 16:18:37.189818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.189843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.189903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffff2e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.189917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.189978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2eff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.189992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.190049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.190063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.190120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.190135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.770 #44 NEW cov: 12146 ft: 15098 corp: 31/723b lim: 40 exec/s: 44 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:51.770 [2024-07-15 16:18:37.239472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.239500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.770 #45 NEW cov: 12146 ft: 15110 corp: 32/736b lim: 40 exec/s: 45 rss: 74Mb L: 13/40 MS: 1 EraseBytes- 00:07:51.770 [2024-07-15 16:18:37.289979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.290004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.290065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00001605 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.290080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.290142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c11e020a cdw11:a7270000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.290157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.290216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.290230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.770 #46 NEW cov: 12146 ft: 15116 corp: 33/773b lim: 40 exec/s: 46 rss: 74Mb L: 37/40 MS: 1 CMP- DE: "\005\301\036\002\012\247'\000"- 00:07:51.770 [2024-07-15 16:18:37.340030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3dffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.340055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.340115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.340130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.770 [2024-07-15 16:18:37.340189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:ffff0b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.770 [2024-07-15 16:18:37.340204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.029 #47 NEW cov: 12146 ft: 15128 corp: 34/801b lim: 40 exec/s: 47 rss: 74Mb L: 28/40 MS: 1 ChangeBit- 00:07:52.029 [2024-07-15 16:18:37.390004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.029 [2024-07-15 16:18:37.390030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.029 [2024-07-15 16:18:37.390091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:27a70a10 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.029 [2024-07-15 16:18:37.390106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.029 #48 NEW cov: 12146 ft: 15132 corp: 35/822b lim: 40 exec/s: 48 rss: 74Mb L: 21/40 MS: 1 CMP- DE: "\000'\247\012\020A\224\340"- 00:07:52.029 [2024-07-15 16:18:37.430247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.430274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.030 [2024-07-15 16:18:37.430335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.430349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.030 [2024-07-15 16:18:37.430407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000080 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.430422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.030 #49 NEW cov: 12146 ft: 15166 corp: 36/850b lim: 40 exec/s: 49 rss: 74Mb L: 28/40 MS: 1 ChangeBit- 00:07:52.030 [2024-07-15 16:18:37.470349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.470374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.030 [2024-07-15 16:18:37.470435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.470449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.030 [2024-07-15 16:18:37.470508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.470524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.030 #50 NEW cov: 12146 ft: 15231 corp: 37/878b lim: 40 exec/s: 50 rss: 74Mb L: 28/40 MS: 1 ShuffleBytes- 00:07:52.030 [2024-07-15 16:18:37.510201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.510226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.030 #51 NEW cov: 12146 ft: 15286 corp: 38/887b lim: 40 exec/s: 51 rss: 74Mb L: 9/40 MS: 1 CrossOver- 00:07:52.030 [2024-07-15 16:18:37.560881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.560909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.030 [2024-07-15 16:18:37.560969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:04000000 cdw11:ffff2e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.560984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.030 [2024-07-15 16:18:37.561045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 
cdw10:2e2e2e2e cdw11:2e2e2eff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.561061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.030 [2024-07-15 16:18:37.561119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.561136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.030 [2024-07-15 16:18:37.561193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.030 [2024-07-15 16:18:37.561213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.030 #52 NEW cov: 12146 ft: 15347 corp: 39/927b lim: 40 exec/s: 26 rss: 74Mb L: 40/40 MS: 1 ChangeBit- 00:07:52.030 #52 DONE cov: 12146 ft: 15347 corp: 39/927b lim: 40 exec/s: 26 rss: 74Mb 00:07:52.030 ###### Recommended dictionary. ###### 00:07:52.030 "\005\301\036\002\012\247'\000" # Uses: 0 00:07:52.030 "\000'\247\012\020A\224\340" # Uses: 0 00:07:52.030 ###### End of recommended dictionary. ###### 00:07:52.030 Done 52 runs in 2 second(s) 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 
00:07:52.289 16:18:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:52.289 [2024-07-15 16:18:37.779929] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:07:52.289 [2024-07-15 16:18:37.780001] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519720 ] 00:07:52.289 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.548 [2024-07-15 16:18:37.981306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.548 [2024-07-15 16:18:38.054109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.548 [2024-07-15 16:18:38.114015] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.807 [2024-07-15 16:18:38.130232] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:52.807 INFO: Running with entropic power schedule (0xFF, 100). 00:07:52.807 INFO: Seed: 1117804132 00:07:52.807 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:52.807 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:52.807 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:52.807 INFO: A corpus is not provided, starting from an empty corpus 00:07:52.807 #2 INITED exec/s: 0 rss: 65Mb 00:07:52.807 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:52.807 This may also happen if the target rejected all inputs we tried so far 00:07:52.807 [2024-07-15 16:18:38.179372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.807 [2024-07-15 16:18:38.179403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.807 [2024-07-15 16:18:38.179462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.807 [2024-07-15 16:18:38.179477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.066 NEW_FUNC[1/700]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:53.066 NEW_FUNC[2/700]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:53.066 #13 NEW cov: 11929 ft: 11902 corp: 2/25b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:53.066 [2024-07-15 16:18:38.510215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.510256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.066 [2024-07-15 16:18:38.510315] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.510330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.066 #14 NEW cov: 12059 ft: 12504 corp: 3/49b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ShuffleBytes- 00:07:53.066 [2024-07-15 16:18:38.560399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000003c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.560428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.066 [2024-07-15 16:18:38.560486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000003c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.560502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.066 [2024-07-15 16:18:38.560566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000003c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.560581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.066 #17 NEW cov: 12065 ft: 12936 corp: 4/82b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:53.066 [2024-07-15 16:18:38.600369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.600397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:53.066 [2024-07-15 16:18:38.600456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.600472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.066 #18 NEW cov: 12157 ft: 13236 corp: 5/106b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 ChangeBinInt- 00:07:53.066 [2024-07-15 16:18:38.640513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.640547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.066 [2024-07-15 16:18:38.640610] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.066 [2024-07-15 16:18:38.640624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.325 #19 NEW cov: 12157 ft: 13392 corp: 6/130b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 ChangeByte- 00:07:53.325 [2024-07-15 16:18:38.680583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.680610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.325 [2024-07-15 16:18:38.680670] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.680685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.325 #20 NEW cov: 12157 ft: 13493 corp: 7/154b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 ChangeBinInt- 00:07:53.325 [2024-07-15 16:18:38.720704] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.720732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.325 [2024-07-15 16:18:38.720792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.720809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.325 [2024-07-15 16:18:38.720868] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.720884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.325 #22 NEW cov: 12157 ft: 13613 corp: 8/181b lim: 35 exec/s: 0 rss: 72Mb L: 27/33 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:53.325 [2024-07-15 16:18:38.760847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.760873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.325 [2024-07-15 16:18:38.760935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.760949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.325 #23 NEW cov: 12157 ft: 13697 corp: 9/206b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 InsertByte- 00:07:53.325 [2024-07-15 16:18:38.810974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.811000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.325 [2024-07-15 16:18:38.811061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.811075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.325 #24 NEW cov: 12157 ft: 13814 corp: 10/231b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 ChangeBinInt- 00:07:53.325 [2024-07-15 16:18:38.861037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.861068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.325 [2024-07-15 16:18:38.861129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.861146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.325 [2024-07-15 16:18:38.861202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.325 [2024-07-15 16:18:38.861219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.325 #25 NEW cov: 12157 ft: 13883 corp: 11/258b lim: 35 exec/s: 0 rss: 73Mb L: 27/33 MS: 1 ShuffleBytes- 00:07:53.584 [2024-07-15 16:18:38.911216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:38.911244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.584 [2024-07-15 16:18:38.911303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:38.911321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.584 [2024-07-15 16:18:38.911379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:38.911397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:53.584 #26 NEW cov: 12157 ft: 13903 corp: 12/285b lim: 35 exec/s: 0 rss: 73Mb L: 27/33 MS: 1 CopyPart- 00:07:53.584 [2024-07-15 16:18:38.961359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:38.961387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.584 [2024-07-15 16:18:38.961446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:38.961462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.584 [2024-07-15 16:18:38.961519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:38.961542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.584 #27 NEW cov: 12157 ft: 13951 corp: 13/312b lim: 35 exec/s: 0 rss: 73Mb L: 27/33 MS: 1 CopyPart- 00:07:53.584 [2024-07-15 16:18:39.001482] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:39.001508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.584 [2024-07-15 16:18:39.001575] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:39.001590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.584 #28 NEW cov: 12157 ft: 13982 corp: 14/338b lim: 35 exec/s: 0 rss: 73Mb L: 26/33 MS: 1 InsertByte- 00:07:53.584 [2024-07-15 16:18:39.051786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:39.051812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.584 [2024-07-15 16:18:39.051877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:39.051892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.584 [2024-07-15 16:18:39.051952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.584 [2024-07-15 16:18:39.051967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.585 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:53.585 #29 NEW cov: 12180 ft: 14031 corp: 15/372b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:07:53.585 [2024-07-15 16:18:39.101763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:53.585 [2024-07-15 16:18:39.101791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.585 [2024-07-15 16:18:39.101851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.585 [2024-07-15 16:18:39.101868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.585 [2024-07-15 16:18:39.101924] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.585 [2024-07-15 16:18:39.101942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.585 #30 NEW cov: 12180 ft: 14070 corp: 16/399b lim: 35 exec/s: 0 rss: 73Mb L: 27/34 MS: 1 ShuffleBytes- 00:07:53.585 [2024-07-15 16:18:39.151734] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.585 [2024-07-15 16:18:39.151760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.585 [2024-07-15 16:18:39.151819] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.585 [2024-07-15 16:18:39.151834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.843 #33 NEW cov: 12180 ft: 14187 corp: 17/416b lim: 35 exec/s: 33 rss: 73Mb L: 17/34 MS: 3 ShuffleBytes-InsertByte-CrossOver- 00:07:53.843 [2024-07-15 16:18:39.192022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.192047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.192109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.192124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.192184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.192201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.843 #39 NEW cov: 12180 ft: 14231 corp: 18/442b lim: 35 exec/s: 39 rss: 73Mb L: 26/34 MS: 1 InsertRepeatedBytes- 00:07:53.843 [2024-07-15 16:18:39.242370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.242398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.242462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 
16:18:39.242477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.242543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.242558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.843 #40 NEW cov: 12180 ft: 14252 corp: 19/474b lim: 35 exec/s: 40 rss: 73Mb L: 32/34 MS: 1 InsertRepeatedBytes- 00:07:53.843 [2024-07-15 16:18:39.292287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.292315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.292375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.292392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.292451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.292468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.843 #41 NEW cov: 12180 ft: 14270 corp: 20/501b lim: 35 exec/s: 41 rss: 73Mb L: 27/34 MS: 1 ShuffleBytes- 00:07:53.843 [2024-07-15 16:18:39.342677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.342704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.342764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.342778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.342836] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.342851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.843 #42 NEW cov: 12180 ft: 14283 corp: 21/535b lim: 35 exec/s: 42 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:07:53.843 [2024-07-15 16:18:39.392609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.392637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.843 [2024-07-15 16:18:39.392700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.843 [2024-07-15 16:18:39.392715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.102 #43 NEW cov: 12180 ft: 14312 corp: 22/559b lim: 35 exec/s: 43 rss: 73Mb L: 24/34 MS: 1 ShuffleBytes- 00:07:54.102 [2024-07-15 16:18:39.443043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.102 [2024-07-15 16:18:39.443072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.102 [2024-07-15 16:18:39.443130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.102 [2024-07-15 16:18:39.443149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.102 [2024-07-15 16:18:39.443208] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.102 [2024-07-15 16:18:39.443223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.102 [2024-07-15 16:18:39.443278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.102 [2024-07-15 16:18:39.443294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.102 [2024-07-15 16:18:39.443353] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.102 [2024-07-15 16:18:39.443369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.102 #44 NEW cov: 12180 ft: 14516 corp: 23/594b lim: 35 exec/s: 44 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:54.102 [2024-07-15 16:18:39.482818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.102 [2024-07-15 16:18:39.482846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.102 [2024-07-15 16:18:39.482907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.102 [2024-07-15 16:18:39.482923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.102 [2024-07-15 16:18:39.482982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.102 [2024-07-15 16:18:39.482999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.103 #45 NEW cov: 12180 ft: 14554 corp: 24/621b lim: 35 exec/s: 45 rss: 74Mb L: 27/35 MS: 1 ShuffleBytes- 00:07:54.103 #46 NEW cov: 12180 ft: 15203 corp: 25/629b lim: 35 exec/s: 46 rss: 74Mb L: 8/35 MS: 1 CrossOver- 00:07:54.103 [2024-07-15 16:18:39.582990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.103 
[2024-07-15 16:18:39.583019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.103 #47 NEW cov: 12180 ft: 15387 corp: 26/645b lim: 35 exec/s: 47 rss: 74Mb L: 16/35 MS: 1 EraseBytes- 00:07:54.103 [2024-07-15 16:18:39.623206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.103 [2024-07-15 16:18:39.623235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.103 [2024-07-15 16:18:39.623298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.103 [2024-07-15 16:18:39.623316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.103 [2024-07-15 16:18:39.623375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.103 [2024-07-15 16:18:39.623393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.103 #48 NEW cov: 12180 ft: 15406 corp: 27/672b lim: 35 exec/s: 48 rss: 74Mb L: 27/35 MS: 1 ChangeByte- 00:07:54.103 [2024-07-15 16:18:39.673442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.103 [2024-07-15 16:18:39.673473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.103 [2024-07-15 16:18:39.673536] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.103 [2024-07-15 16:18:39.673553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.103 [2024-07-15 16:18:39.673613] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.103 [2024-07-15 16:18:39.673630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.103 [2024-07-15 16:18:39.673687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.103 [2024-07-15 16:18:39.673702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.713615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.713642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.713703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.713720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.713777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.713793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.713852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.713867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.362 #50 NEW cov: 12180 ft: 15429 corp: 28/702b lim: 35 exec/s: 50 rss: 74Mb L: 30/35 MS: 2 InsertByte-CMP- DE: "\200\000"- 00:07:54.362 [2024-07-15 16:18:39.753917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.753943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.754002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.754016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.754076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.754091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.754149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.754164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.362 #51 NEW cov: 12180 ft: 15459 corp: 29/737b lim: 35 exec/s: 51 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:07:54.362 [2024-07-15 16:18:39.793721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.793750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.793812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.793829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.793891] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.793907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.362 #52 NEW cov: 12180 ft: 15470 corp: 30/764b lim: 35 exec/s: 52 rss: 74Mb L: 27/35 MS: 1 ChangeBit- 00:07:54.362 
[2024-07-15 16:18:39.844054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.844081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.844142] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.844156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.844216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.844230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.362 #53 NEW cov: 12180 ft: 15486 corp: 31/788b lim: 35 exec/s: 53 rss: 74Mb L: 24/35 MS: 1 CrossOver- 00:07:54.362 [2024-07-15 16:18:39.883578] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.883604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.362 #58 NEW cov: 12180 ft: 15534 corp: 32/795b lim: 35 exec/s: 58 rss: 74Mb L: 7/35 MS: 5 CrossOver-CopyPart-ChangeByte-ShuffleBytes-InsertByte- 00:07:54.362 [2024-07-15 16:18:39.924057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.924083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.362 [2024-07-15 16:18:39.924143] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.362 [2024-07-15 16:18:39.924158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.622 #59 NEW cov: 12180 ft: 15541 corp: 33/821b lim: 35 exec/s: 59 rss: 74Mb L: 26/35 MS: 1 InsertByte- 00:07:54.622 [2024-07-15 16:18:39.964374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:39.964400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:39.964462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:39.964478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:39.964541] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:39.964558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.622 #60 NEW cov: 12180 ft: 15546 corp: 34/855b lim: 35 
exec/s: 60 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:54.622 [2024-07-15 16:18:40.014670] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.014699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.014760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.014775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.014837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.014853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.014912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.014926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.622 #61 NEW cov: 12180 ft: 15587 corp: 35/890b lim: 35 exec/s: 61 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:54.622 [2024-07-15 16:18:40.054548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.054578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.054642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.054657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.054718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.054732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.054792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.054806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.622 #62 NEW cov: 12180 ft: 15609 corp: 36/924b lim: 35 exec/s: 62 rss: 74Mb L: 34/35 MS: 1 PersAutoDict- DE: "\200\000"- 00:07:54.622 [2024-07-15 16:18:40.094341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.094369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.094431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:800000bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.094448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.622 #63 NEW cov: 12180 ft: 15630 corp: 37/941b lim: 35 exec/s: 63 rss: 74Mb L: 17/35 MS: 1 EraseBytes- 00:07:54.622 [2024-07-15 16:18:40.144671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.144697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.144761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.144776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.622 [2024-07-15 16:18:40.144835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.622 [2024-07-15 16:18:40.144849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.622 #64 pulse cov: 12180 ft: 15636 corp: 37/941b lim: 35 exec/s: 32 rss: 74Mb 00:07:54.622 #64 NEW cov: 12180 ft: 15636 corp: 38/965b lim: 35 exec/s: 32 rss: 74Mb L: 24/35 MS: 1 ChangeBit- 00:07:54.622 #64 DONE cov: 12180 ft: 15636 corp: 38/965b lim: 35 exec/s: 32 rss: 74Mb 00:07:54.622 ###### Recommended dictionary. ###### 00:07:54.622 "\200\000" # Uses: 1 00:07:54.622 ###### End of recommended dictionary. 
###### 00:07:54.622 Done 64 runs in 2 second(s) 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:54.881 16:18:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:54.881 [2024-07-15 16:18:40.363879] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:54.881 [2024-07-15 16:18:40.363951] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520073 ] 00:07:54.881 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.141 [2024-07-15 16:18:40.569222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.141 [2024-07-15 16:18:40.641712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.141 [2024-07-15 16:18:40.701224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.141 [2024-07-15 16:18:40.717448] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:55.399 INFO: Running with entropic power schedule (0xFF, 100). 00:07:55.399 INFO: Seed: 3705808856 00:07:55.399 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:55.399 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:55.399 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:55.399 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.399 #2 INITED exec/s: 0 rss: 65Mb 00:07:55.399 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:55.399 This may also happen if the target rejected all inputs we tried so far 00:07:55.399 [2024-07-15 16:18:40.772927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.399 [2024-07-15 16:18:40.772959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.658 NEW_FUNC[1/698]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:55.658 NEW_FUNC[2/698]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:55.658 #9 NEW cov: 11887 ft: 11879 corp: 2/20b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:55.658 [2024-07-15 16:18:41.103653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.658 [2024-07-15 16:18:41.103694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.658 [2024-07-15 16:18:41.103752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.658 [2024-07-15 16:18:41.103767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.658 #14 NEW cov: 12028 ft: 12669 corp: 3/34b lim: 35 exec/s: 0 rss: 72Mb L: 14/19 MS: 5 InsertByte-EraseBytes-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:55.658 [2024-07-15 16:18:41.143588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.658 [2024-07-15 16:18:41.143615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:55.658 #15 NEW cov: 12034 ft: 12920 corp: 4/41b lim: 35 exec/s: 0 rss: 72Mb L: 7/19 MS: 1 InsertRepeatedBytes- 00:07:55.658 [2024-07-15 16:18:41.183864] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.658 [2024-07-15 16:18:41.183890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.658 [2024-07-15 16:18:41.183946] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.658 [2024-07-15 16:18:41.183960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.658 #17 NEW cov: 12119 ft: 13258 corp: 5/57b lim: 35 exec/s: 0 rss: 72Mb L: 16/19 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:55.658 [2024-07-15 16:18:41.223805] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.658 [2024-07-15 16:18:41.223831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.916 #18 NEW cov: 12119 ft: 13320 corp: 6/65b lim: 35 exec/s: 0 rss: 72Mb L: 8/19 MS: 1 InsertByte- 00:07:55.916 [2024-07-15 16:18:41.273940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.916 [2024-07-15 16:18:41.273969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.916 #19 NEW cov: 12119 ft: 13436 corp: 7/74b lim: 35 exec/s: 0 rss: 72Mb L: 9/19 MS: 1 InsertByte- 00:07:55.916 [2024-07-15 16:18:41.324233] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.916 [2024-07-15 16:18:41.324258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.916 #20 NEW cov: 12119 ft: 13570 corp: 8/93b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ShuffleBytes- 00:07:55.916 [2024-07-15 16:18:41.374318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.916 [2024-07-15 16:18:41.374343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.916 [2024-07-15 16:18:41.374402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.916 [2024-07-15 16:18:41.374417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.916 #21 NEW cov: 12119 ft: 13580 corp: 9/108b lim: 35 exec/s: 0 rss: 73Mb L: 15/19 MS: 1 InsertByte- 00:07:55.916 [2024-07-15 16:18:41.424453] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.916 [2024-07-15 16:18:41.424480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.916 [2024-07-15 16:18:41.424538] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.916 [2024-07-15 16:18:41.424553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.916 #22 NEW cov: 12119 ft: 13592 corp: 10/124b lim: 35 exec/s: 0 rss: 73Mb L: 16/19 MS: 1 InsertByte- 00:07:55.916 [2024-07-15 16:18:41.474730] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.916 [2024-07-15 16:18:41.474756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.916 [2024-07-15 16:18:41.474867] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.916 [2024-07-15 16:18:41.474884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.175 #23 NEW cov: 12119 ft: 13792 corp: 11/147b lim: 35 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:07:56.175 [2024-07-15 16:18:41.524663] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.175 [2024-07-15 16:18:41.524688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.175 #24 NEW cov: 12119 ft: 13818 corp: 12/158b lim: 35 exec/s: 0 rss: 73Mb L: 11/23 MS: 1 EraseBytes- 00:07:56.175 [2024-07-15 16:18:41.564930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.175 [2024-07-15 16:18:41.564956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.175 #30 NEW cov: 12119 ft: 13841 corp: 13/178b lim: 35 exec/s: 0 rss: 73Mb L: 20/23 MS: 1 InsertByte- 00:07:56.176 [2024-07-15 16:18:41.605031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.176 [2024-07-15 16:18:41.605056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.176 #31 NEW cov: 12119 ft: 13870 corp: 14/197b lim: 35 exec/s: 0 rss: 73Mb L: 19/23 MS: 1 CopyPart- 00:07:56.176 [2024-07-15 16:18:41.655119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000072a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.176 [2024-07-15 16:18:41.655149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.176 [2024-07-15 16:18:41.655206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.176 [2024-07-15 16:18:41.655221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.176 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:56.176 #32 NEW cov: 12142 ft: 13949 corp: 15/216b lim: 35 exec/s: 0 rss: 73Mb L: 19/23 MS: 1 ChangeBit- 00:07:56.176 [2024-07-15 16:18:41.695260] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.176 [2024-07-15 16:18:41.695286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.176 [2024-07-15 16:18:41.695342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.176 [2024-07-15 16:18:41.695356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.176 #33 NEW cov: 12142 ft: 13995 corp: 16/235b lim: 35 exec/s: 0 rss: 73Mb L: 19/23 MS: 1 CrossOver- 00:07:56.176 [2024-07-15 16:18:41.745281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.176 [2024-07-15 16:18:41.745306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.434 #34 NEW cov: 12142 ft: 14004 corp: 17/243b lim: 35 exec/s: 34 rss: 73Mb L: 8/23 MS: 1 ChangeBit- 00:07:56.435 [2024-07-15 16:18:41.785531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.785557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.435 [2024-07-15 16:18:41.785615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.785630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.435 #35 NEW cov: 12142 ft: 14023 corp: 18/262b lim: 35 exec/s: 35 rss: 73Mb L: 19/23 MS: 1 ChangeBit- 00:07:56.435 [2024-07-15 16:18:41.835664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.835689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.435 [2024-07-15 16:18:41.835742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.835756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.435 #36 NEW cov: 12142 ft: 14108 corp: 19/280b lim: 35 exec/s: 36 rss: 73Mb L: 18/23 MS: 1 CrossOver- 00:07:56.435 [2024-07-15 16:18:41.885820] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.885847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.435 [2024-07-15 16:18:41.885903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.885921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.435 #37 NEW cov: 12142 ft: 14141 
corp: 20/299b lim: 35 exec/s: 37 rss: 73Mb L: 19/23 MS: 1 InsertByte- 00:07:56.435 [2024-07-15 16:18:41.936071] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.936098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.435 [2024-07-15 16:18:41.936155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.936170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.435 [2024-07-15 16:18:41.936228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.936242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.435 #38 NEW cov: 12142 ft: 14257 corp: 21/325b lim: 35 exec/s: 38 rss: 73Mb L: 26/26 MS: 1 CopyPart- 00:07:56.435 [2024-07-15 16:18:41.975907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.435 [2024-07-15 16:18:41.975933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.435 #39 NEW cov: 12142 ft: 14309 corp: 22/337b lim: 35 exec/s: 39 rss: 73Mb L: 12/26 MS: 1 InsertByte- 00:07:56.694 [2024-07-15 16:18:42.016165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.016191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.694 [2024-07-15 16:18:42.016248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.016262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.694 #40 NEW cov: 12142 ft: 14332 corp: 23/355b lim: 35 exec/s: 40 rss: 73Mb L: 18/26 MS: 1 CopyPart- 00:07:56.694 [2024-07-15 16:18:42.056238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.056265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.694 [2024-07-15 16:18:42.056319] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NON OPERATIONAL POWER STATE CONFIG cid:5 cdw10:00000511 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.056333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.694 #41 NEW cov: 12142 ft: 14360 corp: 24/370b lim: 35 exec/s: 41 rss: 73Mb L: 15/26 MS: 1 CMP- DE: "\266\006\3619\021\247'\000"- 00:07:56.694 [2024-07-15 16:18:42.096542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 
16:18:42.096569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.694 [2024-07-15 16:18:42.096629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.096644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.694 #42 NEW cov: 12142 ft: 14367 corp: 25/394b lim: 35 exec/s: 42 rss: 73Mb L: 24/26 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:56.694 [2024-07-15 16:18:42.146490] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.146516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.694 [2024-07-15 16:18:42.146585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.146600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.694 #43 NEW cov: 12142 ft: 14374 corp: 26/413b lim: 35 exec/s: 43 rss: 73Mb L: 19/26 MS: 1 ShuffleBytes- 00:07:56.694 [2024-07-15 16:18:42.186515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.186546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.694 #44 NEW cov: 12142 ft: 14389 corp: 27/425b lim: 35 exec/s: 44 rss: 73Mb L: 12/26 MS: 1 EraseBytes- 00:07:56.694 [2024-07-15 16:18:42.226900] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.226927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.694 [2024-07-15 16:18:42.226984] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.694 [2024-07-15 16:18:42.227000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.694 #45 NEW cov: 12142 ft: 14418 corp: 28/449b lim: 35 exec/s: 45 rss: 73Mb L: 24/26 MS: 1 ChangeBinInt- 00:07:56.952 [2024-07-15 16:18:42.276789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.952 [2024-07-15 16:18:42.276816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.952 #46 NEW cov: 12142 ft: 14475 corp: 29/461b lim: 35 exec/s: 46 rss: 73Mb L: 12/26 MS: 1 CrossOver- 00:07:56.952 [2024-07-15 16:18:42.316947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.316973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.953 [2024-07-15 16:18:42.317032] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.317046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.953 #47 NEW cov: 12142 ft: 14483 corp: 30/477b lim: 35 exec/s: 47 rss: 74Mb L: 16/26 MS: 1 CopyPart- 00:07:56.953 #48 NEW cov: 12142 ft: 14516 corp: 31/486b lim: 35 exec/s: 48 rss: 74Mb L: 9/26 MS: 1 PersAutoDict- DE: "\266\006\3619\021\247'\000"- 00:07:56.953 [2024-07-15 16:18:42.407569] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.407597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.953 [2024-07-15 16:18:42.407655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.407670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.953 [2024-07-15 16:18:42.407724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.407738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.953 [2024-07-15 16:18:42.407798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.407812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.953 [2024-07-15 16:18:42.407866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.407880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.953 #49 NEW cov: 12142 ft: 14978 corp: 32/521b lim: 35 exec/s: 49 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:56.953 [2024-07-15 16:18:42.457403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.457429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.953 #50 NEW cov: 12142 ft: 14999 corp: 33/541b lim: 35 exec/s: 50 rss: 74Mb L: 20/35 MS: 1 ChangeByte- 00:07:56.953 [2024-07-15 16:18:42.497380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.953 [2024-07-15 16:18:42.497406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.953 #51 NEW cov: 12142 ft: 15033 corp: 34/550b lim: 35 exec/s: 51 rss: 74Mb L: 9/35 MS: 1 CrossOver- 00:07:57.212 [2024-07-15 16:18:42.537538] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.537579] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.212 #52 NEW cov: 12142 ft: 15040 corp: 35/562b lim: 35 exec/s: 52 rss: 74Mb L: 12/35 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:57.212 [2024-07-15 16:18:42.587930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.587956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.212 [2024-07-15 16:18:42.588016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.588031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.212 #53 NEW cov: 12142 ft: 15046 corp: 36/585b lim: 35 exec/s: 53 rss: 74Mb L: 23/35 MS: 1 InsertRepeatedBytes- 00:07:57.212 [2024-07-15 16:18:42.627998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.628024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.212 [2024-07-15 16:18:42.628080] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.628094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.212 [2024-07-15 16:18:42.628152] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.628167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.212 #54 NEW cov: 12142 ft: 15055 corp: 37/607b lim: 35 exec/s: 54 rss: 74Mb L: 22/35 MS: 1 InsertRepeatedBytes- 00:07:57.212 [2024-07-15 16:18:42.667846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.667876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.212 #55 NEW cov: 12142 ft: 15060 corp: 38/615b lim: 35 exec/s: 55 rss: 74Mb L: 8/35 MS: 1 EraseBytes- 00:07:57.212 [2024-07-15 16:18:42.708260] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.708286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.212 [2024-07-15 16:18:42.708345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.708360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.212 #56 NEW cov: 12142 ft: 15063 corp: 39/636b lim: 35 exec/s: 56 rss: 74Mb L: 21/35 MS: 1 InsertByte- 00:07:57.212 [2024-07-15 16:18:42.758124] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.212 [2024-07-15 16:18:42.758149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.212 #57 NEW cov: 12142 ft: 15077 corp: 40/647b lim: 35 exec/s: 28 rss: 74Mb L: 11/35 MS: 1 EraseBytes- 00:07:57.212 #57 DONE cov: 12142 ft: 15077 corp: 40/647b lim: 35 exec/s: 28 rss: 74Mb 00:07:57.212 ###### Recommended dictionary. ###### 00:07:57.212 "\266\006\3619\021\247'\000" # Uses: 1 00:07:57.212 "\001\000\000\000" # Uses: 1 00:07:57.212 ###### End of recommended dictionary. ###### 00:07:57.212 Done 57 runs in 2 second(s) 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:57.471 16:18:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:57.471 [2024-07-15 16:18:42.961117] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:07:57.471 [2024-07-15 16:18:42.961202] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520432 ] 00:07:57.471 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.730 [2024-07-15 16:18:43.160450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.730 [2024-07-15 16:18:43.232554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.730 [2024-07-15 16:18:43.291961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.730 [2024-07-15 16:18:43.308204] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:57.989 INFO: Running with entropic power schedule (0xFF, 100). 00:07:57.989 INFO: Seed: 2002839330 00:07:57.989 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:07:57.989 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:07:57.989 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:57.989 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.989 #2 INITED exec/s: 0 rss: 65Mb 00:07:57.989 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:57.989 This may also happen if the target rejected all inputs we tried so far 00:07:57.989 [2024-07-15 16:18:43.363526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.989 [2024-07-15 16:18:43.363568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.989 [2024-07-15 16:18:43.363625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.989 [2024-07-15 16:18:43.363642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.247 NEW_FUNC[1/697]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:58.247 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:58.247 #24 NEW cov: 11975 ft: 11971 corp: 2/55b lim: 105 exec/s: 0 rss: 72Mb L: 54/54 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:58.247 [2024-07-15 16:18:43.716258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.248 [2024-07-15 16:18:43.716309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.248 [2024-07-15 16:18:43.716413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.248 [2024-07-15 16:18:43.716439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.248 NEW_FUNC[1/1]: 0x133f350 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 
00:07:58.248 #29 NEW cov: 12118 ft: 12636 corp: 3/99b lim: 105 exec/s: 0 rss: 72Mb L: 44/54 MS: 5 ChangeByte-ChangeBit-ChangeBinInt-CopyPart-CrossOver- 00:07:58.248 [2024-07-15 16:18:43.776233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.248 [2024-07-15 16:18:43.776260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.248 [2024-07-15 16:18:43.776328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.248 [2024-07-15 16:18:43.776347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.248 #30 NEW cov: 12124 ft: 12895 corp: 4/150b lim: 105 exec/s: 0 rss: 72Mb L: 51/54 MS: 1 CrossOver- 00:07:58.508 [2024-07-15 16:18:43.826382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:43.826413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.508 [2024-07-15 16:18:43.826484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:43.826501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.508 #31 NEW cov: 12209 ft: 13190 corp: 5/208b lim: 105 exec/s: 0 rss: 73Mb L: 58/58 MS: 1 CrossOver- 00:07:58.508 [2024-07-15 16:18:43.886585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:43.886611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.508 [2024-07-15 16:18:43.886692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:43.886710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.508 #32 NEW cov: 12209 ft: 13258 corp: 6/262b lim: 105 exec/s: 0 rss: 73Mb L: 54/58 MS: 1 ShuffleBytes- 00:07:58.508 [2024-07-15 16:18:43.946880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:43.946906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.508 [2024-07-15 16:18:43.946968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:43.946985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.508 #33 NEW cov: 12209 ft: 13339 corp: 7/320b lim: 105 exec/s: 0 rss: 73Mb L: 58/58 MS: 1 CopyPart- 00:07:58.508 [2024-07-15 16:18:44.007209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:44.007237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.508 [2024-07-15 16:18:44.007316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:44.007332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.508 #34 NEW cov: 12209 ft: 13375 corp: 8/364b lim: 105 exec/s: 0 rss: 73Mb L: 44/58 MS: 1 ShuffleBytes- 00:07:58.508 [2024-07-15 16:18:44.057676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:44.057703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.508 [2024-07-15 16:18:44.057820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12038856544016727846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:44.057843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.508 [2024-07-15 16:18:44.057938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.508 [2024-07-15 16:18:44.057954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.508 #35 NEW cov: 12209 ft: 13718 corp: 9/430b lim: 105 exec/s: 0 rss: 73Mb L: 66/66 MS: 1 CMP- DE: "\377&\247\022\234>\317\356"- 00:07:58.769 [2024-07-15 16:18:44.107686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12948890935017668608 len:46004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.107714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 [2024-07-15 16:18:44.107827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3003121664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.107845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.769 #36 NEW cov: 12209 ft: 13742 corp: 10/492b lim: 105 exec/s: 0 rss: 73Mb L: 62/66 MS: 1 InsertRepeatedBytes- 00:07:58.769 [2024-07-15 16:18:44.158110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.158139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 [2024-07-15 16:18:44.158219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12038856544016727846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.158238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.769 [2024-07-15 16:18:44.158298] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.158316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.769 #37 NEW cov: 12209 ft: 13838 corp: 11/558b lim: 105 exec/s: 0 rss: 73Mb L: 66/66 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\001"- 00:07:58.769 [2024-07-15 16:18:44.218052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167837952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.218079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 [2024-07-15 16:18:44.218134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.218154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.769 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:58.769 #39 NEW cov: 12232 ft: 13871 corp: 12/616b lim: 105 exec/s: 0 rss: 73Mb L: 58/66 MS: 2 CopyPart-CrossOver- 00:07:58.769 [2024-07-15 16:18:44.268069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.268096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 #45 NEW cov: 12232 ft: 14306 corp: 13/650b lim: 105 exec/s: 0 rss: 73Mb L: 34/66 MS: 1 EraseBytes- 00:07:58.769 [2024-07-15 16:18:44.318257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-15 16:18:44.318284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.084 #46 NEW cov: 12232 ft: 14333 corp: 14/688b lim: 105 exec/s: 46 rss: 73Mb L: 38/66 MS: 1 EraseBytes- 00:07:59.084 [2024-07-15 16:18:44.378941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.084 [2024-07-15 16:18:44.378969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.084 [2024-07-15 16:18:44.379032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17870001846429417472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.084 [2024-07-15 16:18:44.379048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.084 #47 NEW cov: 12232 ft: 14367 corp: 15/732b lim: 105 exec/s: 47 rss: 73Mb L: 44/66 MS: 1 ChangeBinInt- 00:07:59.085 [2024-07-15 16:18:44.439187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.085 [2024-07-15 16:18:44.439215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:59.085 [2024-07-15 16:18:44.439278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.085 [2024-07-15 16:18:44.439295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.085 #48 NEW cov: 12232 ft: 14383 corp: 16/776b lim: 105 exec/s: 48 rss: 73Mb L: 44/66 MS: 1 ShuffleBytes- 00:07:59.085 [2024-07-15 16:18:44.489510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:72057594054770688 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.085 [2024-07-15 16:18:44.489540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.085 [2024-07-15 16:18:44.489597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3003121664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.085 [2024-07-15 16:18:44.489616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.085 #49 NEW cov: 12232 ft: 14433 corp: 17/838b lim: 105 exec/s: 49 rss: 73Mb L: 62/66 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\001"- 00:07:59.085 [2024-07-15 16:18:44.549781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:479056374071296 len:46004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.085 [2024-07-15 16:18:44.549807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.085 [2024-07-15 16:18:44.549866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.085 [2024-07-15 16:18:44.549884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.085 #50 NEW cov: 12232 ft: 14453 corp: 18/894b lim: 105 exec/s: 50 rss: 74Mb L: 56/66 MS: 1 EraseBytes- 00:07:59.085 [2024-07-15 16:18:44.610152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:257 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.085 [2024-07-15 16:18:44.610182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.085 [2024-07-15 16:18:44.610264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.085 [2024-07-15 16:18:44.610282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.085 #51 NEW cov: 12232 ft: 14466 corp: 19/953b lim: 105 exec/s: 51 rss: 74Mb L: 59/66 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\001"- 00:07:59.426 [2024-07-15 16:18:44.660533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.660562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.660642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.660663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.426 #54 NEW cov: 12232 ft: 14492 corp: 20/1005b lim: 105 exec/s: 54 rss: 74Mb L: 52/66 MS: 3 CrossOver-CopyPart-CrossOver- 00:07:59.426 [2024-07-15 16:18:44.710796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.710826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.710887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17592186044416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.710907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.426 #55 NEW cov: 12232 ft: 14504 corp: 21/1049b lim: 105 exec/s: 55 rss: 74Mb L: 44/66 MS: 1 ChangeBit- 00:07:59.426 [2024-07-15 16:18:44.771669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.771702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.771756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.771774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.771848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:3488481280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.771868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.426 #56 NEW cov: 12232 ft: 14537 corp: 22/1115b lim: 105 exec/s: 56 rss: 74Mb L: 66/66 MS: 1 PersAutoDict- DE: "\377&\247\022\234>\317\356"- 00:07:59.426 [2024-07-15 16:18:44.821432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.821461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.426 #57 NEW cov: 12232 ft: 14616 corp: 23/1155b lim: 105 exec/s: 57 rss: 74Mb L: 40/66 MS: 1 CrossOver- 00:07:59.426 [2024-07-15 16:18:44.871784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4294967296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.871817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.871882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.871901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:59.426 #58 NEW cov: 12232 ft: 14630 corp: 24/1214b lim: 105 exec/s: 58 rss: 74Mb L: 59/66 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\001"- 00:07:59.426 [2024-07-15 16:18:44.942346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:72057594054770688 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.942376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.942458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3003121664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.942483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.426 #59 NEW cov: 12232 ft: 14684 corp: 25/1276b lim: 105 exec/s: 59 rss: 74Mb L: 62/66 MS: 1 ShuffleBytes- 00:07:59.426 [2024-07-15 16:18:44.993047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.993077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.993146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17592186044416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.993164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.993220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6004234345560363859 len:21332 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.993240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.426 [2024-07-15 16:18:44.993327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6004234345560363859 len:21332 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.426 [2024-07-15 16:18:44.993347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.686 #65 NEW cov: 12232 ft: 15199 corp: 26/1370b lim: 105 exec/s: 65 rss: 74Mb L: 94/94 MS: 1 InsertRepeatedBytes- 00:07:59.686 [2024-07-15 16:18:45.062729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.686 [2024-07-15 16:18:45.062759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.686 [2024-07-15 16:18:45.062856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17870001846429417472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.686 [2024-07-15 16:18:45.062875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.686 #66 NEW cov: 12232 ft: 15215 corp: 27/1422b lim: 105 exec/s: 66 rss: 74Mb L: 52/94 MS: 1 PersAutoDict- DE: "\377&\247\022\234>\317\356"- 00:07:59.686 [2024-07-15 16:18:45.122761] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.686 [2024-07-15 16:18:45.122790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.686 #67 NEW cov: 12232 ft: 15241 corp: 28/1456b lim: 105 exec/s: 67 rss: 74Mb L: 34/94 MS: 1 ChangeBinInt- 00:07:59.686 [2024-07-15 16:18:45.183212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167837952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.686 [2024-07-15 16:18:45.183239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.686 [2024-07-15 16:18:45.183311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.686 [2024-07-15 16:18:45.183328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.686 #68 NEW cov: 12232 ft: 15250 corp: 29/1514b lim: 105 exec/s: 68 rss: 74Mb L: 58/94 MS: 1 ChangeByte- 00:07:59.686 [2024-07-15 16:18:45.243281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.686 [2024-07-15 16:18:45.243309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.945 #69 NEW cov: 12232 ft: 15291 corp: 30/1544b lim: 105 exec/s: 69 rss: 74Mb L: 30/94 MS: 1 EraseBytes- 00:07:59.945 [2024-07-15 16:18:45.303688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:72057594054770688 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.945 [2024-07-15 16:18:45.303716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.946 [2024-07-15 16:18:45.303774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3003121664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.946 [2024-07-15 16:18:45.303793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.946 #70 NEW cov: 12232 ft: 15292 corp: 31/1606b lim: 105 exec/s: 70 rss: 74Mb L: 62/94 MS: 1 ChangeBinInt- 00:07:59.946 [2024-07-15 16:18:45.354437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167837952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.946 [2024-07-15 16:18:45.354465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.946 [2024-07-15 16:18:45.354542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.946 [2024-07-15 16:18:45.354561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.946 [2024-07-15 16:18:45.354636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:651061555391695113 len:2314 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.946 [2024-07-15 16:18:45.354653] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.946 [2024-07-15 16:18:45.354734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:651061555542690057 len:2314 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.946 [2024-07-15 16:18:45.354752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.946 #71 NEW cov: 12232 ft: 15309 corp: 32/1703b lim: 105 exec/s: 35 rss: 74Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:07:59.946 #71 DONE cov: 12232 ft: 15309 corp: 32/1703b lim: 105 exec/s: 35 rss: 74Mb 00:07:59.946 ###### Recommended dictionary. ###### 00:07:59.946 "\377&\247\022\234>\317\356" # Uses: 2 00:07:59.946 "\001\000\000\000\000\000\000\001" # Uses: 5 00:07:59.946 ###### End of recommended dictionary. ###### 00:07:59.946 Done 71 runs in 2 second(s) 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:59.946 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:00.205 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:00.205 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:00.205 16:18:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:00.206 [2024-07-15 
16:18:45.551442] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:08:00.206 [2024-07-15 16:18:45.551516] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520793 ] 00:08:00.206 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.206 [2024-07-15 16:18:45.752780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.465 [2024-07-15 16:18:45.827994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.465 [2024-07-15 16:18:45.887639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.465 [2024-07-15 16:18:45.903824] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:00.465 INFO: Running with entropic power schedule (0xFF, 100). 00:08:00.465 INFO: Seed: 301878373 00:08:00.465 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:08:00.465 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:08:00.465 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:00.465 INFO: A corpus is not provided, starting from an empty corpus 00:08:00.465 #2 INITED exec/s: 0 rss: 65Mb 00:08:00.465 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:00.465 This may also happen if the target rejected all inputs we tried so far 00:08:00.465 [2024-07-15 16:18:45.953236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.465 [2024-07-15 16:18:45.953267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.465 [2024-07-15 16:18:45.953312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.465 [2024-07-15 16:18:45.953327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.465 [2024-07-15 16:18:45.953385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.465 [2024-07-15 16:18:45.953403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.725 NEW_FUNC[1/699]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:00.725 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:00.725 #3 NEW cov: 12009 ft: 12010 corp: 2/90b lim: 120 exec/s: 0 rss: 72Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:08:00.725 [2024-07-15 16:18:46.284083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.725 [2024-07-15 16:18:46.284127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:00.725 [2024-07-15 16:18:46.284179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.725 [2024-07-15 16:18:46.284198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.725 [2024-07-15 16:18:46.284247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.725 [2024-07-15 16:18:46.284263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.725 [2024-07-15 16:18:46.284316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.725 [2024-07-15 16:18:46.284330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.984 #4 NEW cov: 12139 ft: 12873 corp: 3/197b lim: 120 exec/s: 0 rss: 72Mb L: 107/107 MS: 1 CopyPart- 00:08:00.984 [2024-07-15 16:18:46.343710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:15360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.984 [2024-07-15 16:18:46.343741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.984 #8 NEW cov: 12145 ft: 13912 corp: 4/229b lim: 120 exec/s: 0 rss: 72Mb L: 32/107 MS: 4 CrossOver-ChangeBit-ChangeByte-CrossOver- 00:08:00.984 [2024-07-15 16:18:46.384418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.984 [2024-07-15 16:18:46.384445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.984 [2024-07-15 16:18:46.384498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.984 [2024-07-15 16:18:46.384512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.984 [2024-07-15 16:18:46.384583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.984 [2024-07-15 16:18:46.384600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.984 [2024-07-15 16:18:46.384651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.984 [2024-07-15 16:18:46.384679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.984 [2024-07-15 16:18:46.384730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.984 [2024-07-15 16:18:46.384746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:00.984 #9 NEW cov: 12230 ft: 14229 corp: 5/349b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:08:00.984 [2024-07-15 16:18:46.433951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073695133695 len:65340 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.984 [2024-07-15 16:18:46.433977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.984 #10 NEW cov: 12230 ft: 14293 corp: 6/382b lim: 120 exec/s: 0 rss: 72Mb L: 33/120 MS: 1 InsertByte- 00:08:00.984 [2024-07-15 16:18:46.484698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.985 [2024-07-15 16:18:46.484723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.985 [2024-07-15 16:18:46.484772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.985 [2024-07-15 16:18:46.484787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.985 [2024-07-15 16:18:46.484838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.985 [2024-07-15 16:18:46.484853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.985 [2024-07-15 16:18:46.484904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.985 [2024-07-15 16:18:46.484920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.985 [2024-07-15 16:18:46.484970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.985 [2024-07-15 16:18:46.484985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:00.985 #11 NEW cov: 12230 ft: 14356 corp: 7/502b lim: 120 exec/s: 0 rss: 73Mb L: 120/120 MS: 1 ChangeByte- 00:08:00.985 [2024-07-15 16:18:46.534259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446565952811433983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.985 [2024-07-15 16:18:46.534285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.244 #17 NEW cov: 12230 ft: 14546 corp: 8/536b lim: 120 exec/s: 0 rss: 73Mb L: 34/120 MS: 1 InsertByte- 00:08:01.244 [2024-07-15 16:18:46.584696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.244 [2024-07-15 16:18:46.584723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.244 [2024-07-15 
16:18:46.584768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.244 [2024-07-15 16:18:46.584784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.244 [2024-07-15 16:18:46.584834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.244 [2024-07-15 16:18:46.584849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.244 #18 NEW cov: 12230 ft: 14554 corp: 9/628b lim: 120 exec/s: 0 rss: 73Mb L: 92/120 MS: 1 InsertRepeatedBytes- 00:08:01.244 [2024-07-15 16:18:46.625010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.244 [2024-07-15 16:18:46.625036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.244 [2024-07-15 16:18:46.625085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.244 [2024-07-15 16:18:46.625101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.244 [2024-07-15 16:18:46.625150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.244 [2024-07-15 16:18:46.625165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.244 [2024-07-15 16:18:46.625219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12037839444221296679 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.625235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.245 #24 NEW cov: 12230 ft: 14598 corp: 10/735b lim: 120 exec/s: 0 rss: 73Mb L: 107/120 MS: 1 CMP- DE: "\000'\247\016\3771Y\324"- 00:08:01.245 [2024-07-15 16:18:46.664926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.664953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.664999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.665015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.665066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709549567 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.665082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.245 #25 NEW cov: 12230 ft: 14681 corp: 11/827b lim: 120 exec/s: 0 rss: 73Mb L: 92/120 MS: 1 ChangeBit- 00:08:01.245 [2024-07-15 16:18:46.715063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.715089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.715137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.715152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.715203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709549567 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.715218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.245 #26 NEW cov: 12230 ft: 14724 corp: 12/919b lim: 120 exec/s: 0 rss: 73Mb L: 92/120 MS: 1 PersAutoDict- DE: "\000'\247\016\3771Y\324"- 00:08:01.245 [2024-07-15 16:18:46.765240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.765270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.765308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.765324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.765376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.765392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.245 #27 NEW cov: 12230 ft: 14787 corp: 13/1011b lim: 120 exec/s: 0 rss: 73Mb L: 92/120 MS: 1 ChangeByte- 00:08:01.245 [2024-07-15 16:18:46.805495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.805525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.805583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.805598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.805648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 
lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.805665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.245 [2024-07-15 16:18:46.805715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:15360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.245 [2024-07-15 16:18:46.805731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.505 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:01.505 #28 NEW cov: 12253 ft: 14849 corp: 14/1115b lim: 120 exec/s: 0 rss: 73Mb L: 104/120 MS: 1 CopyPart- 00:08:01.505 [2024-07-15 16:18:46.865643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.865669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:46.865718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.865734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:46.865782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.865798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:46.865849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.865863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.505 #29 NEW cov: 12253 ft: 14874 corp: 15/1227b lim: 120 exec/s: 0 rss: 73Mb L: 112/120 MS: 1 CopyPart- 00:08:01.505 [2024-07-15 16:18:46.905617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.905643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:46.905690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.905705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:46.905756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.905771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.505 #30 NEW cov: 12253 ft: 14909 corp: 16/1312b lim: 120 exec/s: 30 rss: 73Mb L: 85/120 MS: 1 EraseBytes- 00:08:01.505 [2024-07-15 16:18:46.955765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.955792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:46.955838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.955853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:46.955904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.955919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.505 #31 NEW cov: 12253 ft: 14927 corp: 17/1404b lim: 120 exec/s: 31 rss: 73Mb L: 92/120 MS: 1 CopyPart- 00:08:01.505 [2024-07-15 16:18:46.995586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073695133695 len:65340 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:46.995611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.505 #32 NEW cov: 12253 ft: 14946 corp: 18/1437b lim: 120 exec/s: 32 rss: 73Mb L: 33/120 MS: 1 ChangeBinInt- 00:08:01.505 [2024-07-15 16:18:47.036267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:47.036292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:47.036345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:47.036360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:47.036410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:47.036425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:47.036475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:47.036490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:47.036544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:47.036577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:01.505 #33 NEW cov: 12253 ft: 14954 corp: 19/1557b lim: 120 exec/s: 33 rss: 73Mb L: 120/120 MS: 1 CrossOver- 00:08:01.505 [2024-07-15 16:18:47.076115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:47.076141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:47.076187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:47.076205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.505 [2024-07-15 16:18:47.076254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.505 [2024-07-15 16:18:47.076270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.765 #34 NEW cov: 12253 ft: 14964 corp: 20/1649b lim: 120 exec/s: 34 rss: 73Mb L: 92/120 MS: 1 ChangeBinInt- 00:08:01.765 [2024-07-15 16:18:47.116216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.116244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.116280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.116296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.116348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.116364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.765 #35 NEW cov: 12253 ft: 14969 corp: 21/1742b lim: 120 exec/s: 35 rss: 73Mb L: 93/120 MS: 1 InsertByte- 00:08:01.765 [2024-07-15 16:18:47.166502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.166533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.166602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.166618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.765 
[2024-07-15 16:18:47.166682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.166697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.166748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.166763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.765 #36 NEW cov: 12253 ft: 14982 corp: 22/1849b lim: 120 exec/s: 36 rss: 73Mb L: 107/120 MS: 1 CopyPart- 00:08:01.765 [2024-07-15 16:18:47.206717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.206743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.206798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.206814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.206864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.206882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.206934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.206949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.207000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.207015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:01.765 #37 NEW cov: 12253 ft: 15017 corp: 23/1969b lim: 120 exec/s: 37 rss: 73Mb L: 120/120 MS: 1 CopyPart- 00:08:01.765 [2024-07-15 16:18:47.256913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.256940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.256994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.257010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.257060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.257077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.257128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.257144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.257196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.257213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:01.765 #38 NEW cov: 12253 ft: 15024 corp: 24/2089b lim: 120 exec/s: 38 rss: 74Mb L: 120/120 MS: 1 ChangeByte- 00:08:01.765 [2024-07-15 16:18:47.296715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.296742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.296785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.296801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.296852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.296868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.765 #39 NEW cov: 12253 ft: 15101 corp: 25/2181b lim: 120 exec/s: 39 rss: 74Mb L: 92/120 MS: 1 CrossOver- 00:08:01.765 [2024-07-15 16:18:47.336960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.336989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.337034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.337050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.337100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:42767 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.337117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.765 [2024-07-15 16:18:47.337166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.765 [2024-07-15 16:18:47.337180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.025 #40 NEW cov: 12253 ft: 15129 corp: 26/2288b lim: 120 exec/s: 40 rss: 74Mb L: 107/120 MS: 1 PersAutoDict- DE: "\000'\247\016\3771Y\324"- 00:08:02.025 [2024-07-15 16:18:47.376934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.376961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.376999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.377015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.377063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446743678572560383 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.377079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.025 #41 NEW cov: 12253 ft: 15143 corp: 27/2378b lim: 120 exec/s: 41 rss: 74Mb L: 90/120 MS: 1 InsertByte- 00:08:02.025 [2024-07-15 16:18:47.417184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.417211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.417261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.417277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.417325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.417341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.417391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65340 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.417407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.025 #42 NEW cov: 12253 ft: 15150 corp: 28/2483b lim: 120 exec/s: 42 rss: 74Mb L: 105/120 MS: 1 CrossOver- 00:08:02.025 [2024-07-15 16:18:47.466914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:0 lba:18446744073695133695 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.466942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.025 #43 NEW cov: 12253 ft: 15212 corp: 29/2528b lim: 120 exec/s: 43 rss: 74Mb L: 45/120 MS: 1 CopyPart- 00:08:02.025 [2024-07-15 16:18:47.517192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073695133695 len:65340 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.517219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.517256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.517272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.025 #44 NEW cov: 12253 ft: 15570 corp: 30/2584b lim: 120 exec/s: 44 rss: 74Mb L: 56/120 MS: 1 InsertRepeatedBytes- 00:08:02.025 [2024-07-15 16:18:47.557596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.557624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.557669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.557685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.557739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.557756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.557808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.557823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.025 #45 NEW cov: 12253 ft: 15620 corp: 31/2691b lim: 120 exec/s: 45 rss: 74Mb L: 107/120 MS: 1 ShuffleBytes- 00:08:02.025 [2024-07-15 16:18:47.597711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.597739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.597801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.597818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.597868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:281471115604863 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.597883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.025 [2024-07-15 16:18:47.597935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446528569430507519 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.025 [2024-07-15 16:18:47.597949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.284 #46 NEW cov: 12253 ft: 15652 corp: 32/2791b lim: 120 exec/s: 46 rss: 74Mb L: 100/120 MS: 1 CMP- DE: "\2550\031\3343\177\000\000"- 00:08:02.284 [2024-07-15 16:18:47.647402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446565952811433983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.284 [2024-07-15 16:18:47.647428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.284 #47 NEW cov: 12253 ft: 15662 corp: 33/2827b lim: 120 exec/s: 47 rss: 74Mb L: 36/120 MS: 1 CMP- DE: "\000\004"- 00:08:02.284 [2024-07-15 16:18:47.697978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.284 [2024-07-15 16:18:47.698005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.284 [2024-07-15 16:18:47.698055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.284 [2024-07-15 16:18:47.698071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.284 [2024-07-15 16:18:47.698120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18409026426830323711 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.284 [2024-07-15 16:18:47.698135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.284 [2024-07-15 16:18:47.698187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:15360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.284 [2024-07-15 16:18:47.698203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.284 #48 NEW cov: 12253 ft: 15666 corp: 34/2931b lim: 120 exec/s: 48 rss: 74Mb L: 104/120 MS: 1 ChangeByte- 00:08:02.284 [2024-07-15 16:18:47.747689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446565952811433983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.284 [2024-07-15 16:18:47.747716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.284 #49 NEW cov: 12253 ft: 15681 corp: 35/2962b lim: 120 exec/s: 49 rss: 74Mb L: 31/120 MS: 1 EraseBytes- 
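A note on the completion lines above: the "(00/0b)" pair that spdk_nvme_print_completion emits is the NVMe status-code-type / status-code from the completion's status word, and SCT 00h / SC 0Bh is the generic-command status "Invalid Namespace or Format" — the expected rejection for these fuzzed writes, which carry nsid:0 (never a valid I/O namespace ID) and wild LBAs. The trailing fields are the Do Not Retry bit (dnr), phase tag (p), more bit (m), and reported submission-queue head (sqhd). A minimal decode sketch, assuming the standard NVMe completion status layout; this is illustrative, not SPDK's actual print routine:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit status word (upper half of completion dword 3). */
static void print_status(uint16_t status_raw)
{
    unsigned sc  = (status_raw >> 1) & 0xff; /* bits 8:1  - status code (SC)       */
    unsigned sct = (status_raw >> 9) & 0x7;  /* bits 11:9 - status code type (SCT) */
    unsigned dnr = (status_raw >> 15) & 0x1; /* bit 15    - do not retry (DNR)     */
    printf("(%02x/%02x) dnr:%u\n", sct, sc, dnr);
}

int main(void)
{
    print_status(0x8016); /* prints "(00/0b) dnr:1", matching the log lines */
    return 0;
}
```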
00:08:02.285 [2024-07-15 16:18:47.788199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.285 [2024-07-15 16:18:47.788230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.285 [2024-07-15 16:18:47.788272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.285 [2024-07-15 16:18:47.788290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.285 [2024-07-15 16:18:47.788339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:281471115604863 len:65458 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.285 [2024-07-15 16:18:47.788354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.285 [2024-07-15 16:18:47.788408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446528569430507519 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.285 [2024-07-15 16:18:47.788425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.285 #50 NEW cov: 12253 ft: 15701 corp: 36/3062b lim: 120 exec/s: 50 rss: 74Mb L: 100/120 MS: 1 ChangeByte- 00:08:02.285 [2024-07-15 16:18:47.838249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.285 [2024-07-15 16:18:47.838275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.285 [2024-07-15 16:18:47.838321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.285 [2024-07-15 16:18:47.838336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.285 [2024-07-15 16:18:47.838388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.285 [2024-07-15 16:18:47.838404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.285 #51 NEW cov: 12253 ft: 15741 corp: 37/3154b lim: 120 exec/s: 51 rss: 74Mb L: 92/120 MS: 1 CrossOver- 00:08:02.544 [2024-07-15 16:18:47.878499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.544 [2024-07-15 16:18:47.878525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.544 [2024-07-15 16:18:47.878608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.545 [2024-07-15 16:18:47.878624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.545 [2024-07-15 16:18:47.878674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.545 [2024-07-15 16:18:47.878690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.545 [2024-07-15 16:18:47.878741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.545 [2024-07-15 16:18:47.878757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.545 #52 NEW cov: 12253 ft: 15749 corp: 38/3269b lim: 120 exec/s: 52 rss: 74Mb L: 115/120 MS: 1 PersAutoDict- DE: "\2550\031\3343\177\000\000"- 00:08:02.545 [2024-07-15 16:18:47.928160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073695133695 len:65340 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.545 [2024-07-15 16:18:47.928186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.545 #53 NEW cov: 12253 ft: 15751 corp: 39/3302b lim: 120 exec/s: 26 rss: 74Mb L: 33/120 MS: 1 PersAutoDict- DE: "\000'\247\016\3771Y\324"- 00:08:02.545 #53 DONE cov: 12253 ft: 15751 corp: 39/3302b lim: 120 exec/s: 26 rss: 74Mb 00:08:02.545 ###### Recommended dictionary. ###### 00:08:02.545 "\000'\247\016\3771Y\324" # Uses: 3 00:08:02.545 "\2550\031\3343\177\000\000" # Uses: 1 00:08:02.545 "\000\004" # Uses: 0 00:08:02.545 ###### End of recommended dictionary. 
###### 00:08:02.545 Done 53 runs in 2 second(s) 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:02.545 16:18:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:02.803 [2024-07-15 16:18:48.133159] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
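For context on the NEW_FUNC lines that follow: libFuzzer reports newly covered functions as it resolves the harness's TestOneInput (here at llvm_nvme_fuzz.c:780) and the per-command handler such as fuzz_nvm_write_zeroes_command, then feeds each mutated byte string through the standard entry point. A minimal sketch of that entry-point shape — handle_input() is a hypothetical stand-in for the handler selected by -Z, not SPDK's actual wiring:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the fuzz_nvm_*_command handler; SPDK's real
 * dispatch lives in llvm_nvme_fuzz.c. */
static void handle_input(const uint8_t *data, size_t size)
{
    (void)data;
    (void)size;
    /* ...decode the bytes into an NVMe command and submit it... */
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    if (size == 0) {
        return 0;             /* nothing to turn into a command */
    }
    handle_input(data, size); /* one command attempt per fuzz input */
    return 0;                 /* libFuzzer convention: always return 0 */
}
```

Built with clang -fsanitize=fuzzer, this shape is what produces the cov/ft/corp status lines seen in this run.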
00:08:02.803 [2024-07-15 16:18:48.133232] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521152 ] 00:08:02.803 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.803 [2024-07-15 16:18:48.329539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.062 [2024-07-15 16:18:48.402977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.062 [2024-07-15 16:18:48.462631] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.062 [2024-07-15 16:18:48.478825] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:03.062 INFO: Running with entropic power schedule (0xFF, 100). 00:08:03.062 INFO: Seed: 2877876085 00:08:03.062 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:08:03.062 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:08:03.062 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:03.062 INFO: A corpus is not provided, starting from an empty corpus 00:08:03.062 #2 INITED exec/s: 0 rss: 64Mb 00:08:03.062 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:03.062 This may also happen if the target rejected all inputs we tried so far 00:08:03.062 [2024-07-15 16:18:48.534014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.062 [2024-07-15 16:18:48.534044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.320 NEW_FUNC[1/697]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:03.320 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:03.320 #10 NEW cov: 11945 ft: 11944 corp: 2/29b lim: 100 exec/s: 0 rss: 72Mb L: 28/28 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:08:03.320 [2024-07-15 16:18:48.874921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.320 [2024-07-15 16:18:48.874971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.579 #16 NEW cov: 12082 ft: 12581 corp: 3/57b lim: 100 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeBinInt- 00:08:03.579 [2024-07-15 16:18:48.924909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.579 [2024-07-15 16:18:48.924939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.579 #17 NEW cov: 12088 ft: 12851 corp: 4/94b lim: 100 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:08:03.579 [2024-07-15 16:18:48.965365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.579 [2024-07-15 16:18:48.965393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.579 [2024-07-15 16:18:48.965438] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.579 [2024-07-15 16:18:48.965453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.579 [2024-07-15 16:18:48.965501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.579 [2024-07-15 16:18:48.965515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.579 [2024-07-15 16:18:48.965587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.579 [2024-07-15 16:18:48.965602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.579 #18 NEW cov: 12173 ft: 13501 corp: 5/189b lim: 100 exec/s: 0 rss: 72Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:08:03.579 [2024-07-15 16:18:49.015170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.579 [2024-07-15 16:18:49.015197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.579 #19 NEW cov: 12173 ft: 13708 corp: 6/226b lim: 100 exec/s: 0 rss: 72Mb L: 37/95 MS: 1 CMP- DE: "\377\377\377\377\377\377\002\367"- 00:08:03.579 [2024-07-15 16:18:49.065339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.579 [2024-07-15 16:18:49.065366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.579 #20 NEW cov: 12173 ft: 13756 corp: 7/263b lim: 100 exec/s: 0 rss: 72Mb L: 37/95 MS: 1 ChangeByte- 00:08:03.579 [2024-07-15 16:18:49.105441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.579 [2024-07-15 16:18:49.105468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.579 #21 NEW cov: 12173 ft: 13856 corp: 8/291b lim: 100 exec/s: 0 rss: 72Mb L: 28/95 MS: 1 ShuffleBytes- 00:08:03.579 [2024-07-15 16:18:49.145483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.579 [2024-07-15 16:18:49.145508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.839 #22 NEW cov: 12173 ft: 13933 corp: 9/319b lim: 100 exec/s: 0 rss: 72Mb L: 28/95 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\002\367"- 00:08:03.839 [2024-07-15 16:18:49.195990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.839 [2024-07-15 16:18:49.196015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.196068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.839 [2024-07-15 16:18:49.196081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.196131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 
00:08:03.839 [2024-07-15 16:18:49.196149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.196201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.839 [2024-07-15 16:18:49.196215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.839 #23 NEW cov: 12173 ft: 13963 corp: 10/414b lim: 100 exec/s: 0 rss: 73Mb L: 95/95 MS: 1 CopyPart- 00:08:03.839 [2024-07-15 16:18:49.246132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.839 [2024-07-15 16:18:49.246157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.246204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.839 [2024-07-15 16:18:49.246219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.246267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.839 [2024-07-15 16:18:49.246282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.246333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.839 [2024-07-15 16:18:49.246347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.839 #24 NEW cov: 12173 ft: 14005 corp: 11/512b lim: 100 exec/s: 0 rss: 73Mb L: 98/98 MS: 1 CopyPart- 00:08:03.839 [2024-07-15 16:18:49.296258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.839 [2024-07-15 16:18:49.296283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.296336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.839 [2024-07-15 16:18:49.296350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.296398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.839 [2024-07-15 16:18:49.296412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.296461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.839 [2024-07-15 16:18:49.296476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.839 #25 NEW cov: 12173 ft: 14046 corp: 12/607b lim: 100 exec/s: 0 rss: 73Mb L: 95/98 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\002\367"- 00:08:03.839 [2024-07-15 16:18:49.336156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.839 [2024-07-15 
16:18:49.336181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.839 [2024-07-15 16:18:49.336224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.839 [2024-07-15 16:18:49.336238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.839 #26 NEW cov: 12173 ft: 14325 corp: 13/653b lim: 100 exec/s: 0 rss: 73Mb L: 46/98 MS: 1 CrossOver- 00:08:03.839 [2024-07-15 16:18:49.376154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.839 [2024-07-15 16:18:49.376179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.839 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:03.839 #27 NEW cov: 12196 ft: 14346 corp: 14/690b lim: 100 exec/s: 0 rss: 73Mb L: 37/98 MS: 1 ChangeBinInt- 00:08:04.098 [2024-07-15 16:18:49.426300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.098 [2024-07-15 16:18:49.426327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.098 #28 NEW cov: 12196 ft: 14433 corp: 15/727b lim: 100 exec/s: 0 rss: 73Mb L: 37/98 MS: 1 ShuffleBytes- 00:08:04.098 [2024-07-15 16:18:49.466749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.098 [2024-07-15 16:18:49.466774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.466827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.098 [2024-07-15 16:18:49.466841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.466890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.098 [2024-07-15 16:18:49.466904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.466953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.098 [2024-07-15 16:18:49.466967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.098 #29 NEW cov: 12196 ft: 14455 corp: 16/824b lim: 100 exec/s: 0 rss: 73Mb L: 97/98 MS: 1 CopyPart- 00:08:04.098 [2024-07-15 16:18:49.506773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.098 [2024-07-15 16:18:49.506799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.506835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.098 [2024-07-15 16:18:49.506850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
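The WRITE ZEROES (08) prints in this run come from fuzzer 18 exercising opcode 08h. For reference, that command carries the starting LBA in CDW10-11 and a 0-based LBA count in CDW12 bits 15:0; a sketch using the spdk_nvme_cmd field names from spdk/nvme_spec.h, with illustrative values (not the fuzzer's actual command-building code):

```c
#include <stdint.h>
#include <string.h>
#include "spdk/nvme_spec.h"

static void build_write_zeroes(struct spdk_nvme_cmd *cmd,
                               uint32_t nsid, uint64_t slba, uint16_t nlb)
{
    memset(cmd, 0, sizeof(*cmd));
    cmd->opc   = SPDK_NVME_OPC_WRITE_ZEROES; /* opcode 08h, as printed above    */
    cmd->nsid  = nsid;                       /* nsid:0, as fuzzed here, is      */
                                             /* never a valid I/O namespace ID  */
    cmd->cdw10 = (uint32_t)slba;             /* SLBA, low dword                 */
    cmd->cdw11 = (uint32_t)(slba >> 32);     /* SLBA, high dword                */
    cmd->cdw12 = nlb;                        /* NLB, 0-based count, bits 15:0   */
}
```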
00:08:04.098 [2024-07-15 16:18:49.506903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.098 [2024-07-15 16:18:49.506918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.098 #30 NEW cov: 12196 ft: 14698 corp: 17/893b lim: 100 exec/s: 30 rss: 73Mb L: 69/98 MS: 1 InsertRepeatedBytes- 00:08:04.098 [2024-07-15 16:18:49.546942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.098 [2024-07-15 16:18:49.546968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.547020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.098 [2024-07-15 16:18:49.547034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.547082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.098 [2024-07-15 16:18:49.547096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.547146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.098 [2024-07-15 16:18:49.547160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.098 #31 NEW cov: 12196 ft: 14714 corp: 18/988b lim: 100 exec/s: 31 rss: 73Mb L: 95/98 MS: 1 CrossOver- 00:08:04.098 [2024-07-15 16:18:49.597104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.098 [2024-07-15 16:18:49.597129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.597180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.098 [2024-07-15 16:18:49.597195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.597244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.098 [2024-07-15 16:18:49.597258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.597310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.098 [2024-07-15 16:18:49.597325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.098 #32 NEW cov: 12196 ft: 14739 corp: 19/1083b lim: 100 exec/s: 32 rss: 73Mb L: 95/98 MS: 1 ShuffleBytes- 00:08:04.098 [2024-07-15 16:18:49.637117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.098 [2024-07-15 16:18:49.637143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.637187] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.098 [2024-07-15 16:18:49.637201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.098 [2024-07-15 16:18:49.637249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.098 [2024-07-15 16:18:49.637263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.098 #33 NEW cov: 12196 ft: 14766 corp: 20/1155b lim: 100 exec/s: 33 rss: 73Mb L: 72/98 MS: 1 InsertRepeatedBytes- 00:08:04.358 [2024-07-15 16:18:49.687479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.358 [2024-07-15 16:18:49.687505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.687580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.358 [2024-07-15 16:18:49.687594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.687647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.358 [2024-07-15 16:18:49.687662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.687712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.358 [2024-07-15 16:18:49.687727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.687776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:08:04.358 [2024-07-15 16:18:49.687791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:04.358 #34 NEW cov: 12196 ft: 14813 corp: 21/1255b lim: 100 exec/s: 34 rss: 73Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:08:04.358 [2024-07-15 16:18:49.727379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.358 [2024-07-15 16:18:49.727405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.727443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.358 [2024-07-15 16:18:49.727457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.727509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.358 [2024-07-15 16:18:49.727523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.358 #35 NEW cov: 12196 ft: 14865 corp: 22/1315b lim: 100 exec/s: 35 rss: 73Mb L: 60/100 MS: 1 EraseBytes- 00:08:04.358 [2024-07-15 16:18:49.777275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.358 [2024-07-15 16:18:49.777299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.358 #36 NEW cov: 12196 ft: 14922 corp: 23/1343b lim: 100 exec/s: 36 rss: 73Mb L: 28/100 MS: 1 ChangeBit- 00:08:04.358 [2024-07-15 16:18:49.817722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.358 [2024-07-15 16:18:49.817746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.817794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.358 [2024-07-15 16:18:49.817806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.817856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.358 [2024-07-15 16:18:49.817870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.817920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.358 [2024-07-15 16:18:49.817934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.358 #37 NEW cov: 12196 ft: 14927 corp: 24/1438b lim: 100 exec/s: 37 rss: 73Mb L: 95/100 MS: 1 ChangeByte- 00:08:04.358 [2024-07-15 16:18:49.867533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.358 [2024-07-15 16:18:49.867558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.358 #38 NEW cov: 12196 ft: 14964 corp: 25/1466b lim: 100 exec/s: 38 rss: 73Mb L: 28/100 MS: 1 ChangeBinInt- 00:08:04.358 [2024-07-15 16:18:49.908018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.358 [2024-07-15 16:18:49.908043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.908090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.358 [2024-07-15 16:18:49.908104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.908152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.358 [2024-07-15 16:18:49.908165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.358 [2024-07-15 16:18:49.908217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.358 [2024-07-15 16:18:49.908231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.358 #39 NEW cov: 12196 ft: 14981 corp: 26/1561b lim: 100 exec/s: 39 rss: 73Mb L: 95/100 MS: 1 ChangeBinInt- 00:08:04.617 [2024-07-15 16:18:49.947762] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.617 [2024-07-15 16:18:49.947791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.617 #40 NEW cov: 12196 ft: 15039 corp: 27/1588b lim: 100 exec/s: 40 rss: 73Mb L: 27/100 MS: 1 EraseBytes- 00:08:04.617 [2024-07-15 16:18:49.997887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.617 [2024-07-15 16:18:49.997911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.617 #41 NEW cov: 12196 ft: 15053 corp: 28/1608b lim: 100 exec/s: 41 rss: 73Mb L: 20/100 MS: 1 CrossOver- 00:08:04.617 [2024-07-15 16:18:50.038069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.617 [2024-07-15 16:18:50.038097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.617 #42 NEW cov: 12196 ft: 15064 corp: 29/1628b lim: 100 exec/s: 42 rss: 73Mb L: 20/100 MS: 1 ChangeBinInt- 00:08:04.617 [2024-07-15 16:18:50.088287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.617 [2024-07-15 16:18:50.088319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.618 [2024-07-15 16:18:50.088373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.618 [2024-07-15 16:18:50.088387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.618 #43 NEW cov: 12196 ft: 15069 corp: 30/1674b lim: 100 exec/s: 43 rss: 73Mb L: 46/100 MS: 1 CopyPart- 00:08:04.618 [2024-07-15 16:18:50.128585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.618 [2024-07-15 16:18:50.128611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.618 [2024-07-15 16:18:50.128658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.618 [2024-07-15 16:18:50.128672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.618 [2024-07-15 16:18:50.128721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.618 [2024-07-15 16:18:50.128734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.618 [2024-07-15 16:18:50.128784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.618 [2024-07-15 16:18:50.128798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.618 #44 NEW cov: 12196 ft: 15073 corp: 31/1771b lim: 100 exec/s: 44 rss: 73Mb L: 97/100 MS: 1 CMP- DE: "\377\377"- 00:08:04.618 [2024-07-15 16:18:50.168409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.618 [2024-07-15 16:18:50.168435] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.877 #45 NEW cov: 12196 ft: 15084 corp: 32/1808b lim: 100 exec/s: 45 rss: 74Mb L: 37/100 MS: 1 ChangeByte- 00:08:04.877 [2024-07-15 16:18:50.218540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.877 [2024-07-15 16:18:50.218566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.877 #46 NEW cov: 12196 ft: 15098 corp: 33/1836b lim: 100 exec/s: 46 rss: 74Mb L: 28/100 MS: 1 ChangeBit- 00:08:04.877 [2024-07-15 16:18:50.259100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.877 [2024-07-15 16:18:50.259126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.259174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.877 [2024-07-15 16:18:50.259188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.259237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.877 [2024-07-15 16:18:50.259252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.259304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.877 [2024-07-15 16:18:50.259320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.259369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:08:04.877 [2024-07-15 16:18:50.259383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:04.877 #52 NEW cov: 12196 ft: 15130 corp: 34/1936b lim: 100 exec/s: 52 rss: 74Mb L: 100/100 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\002\367"- 00:08:04.877 [2024-07-15 16:18:50.308774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.877 [2024-07-15 16:18:50.308800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.877 #53 NEW cov: 12196 ft: 15154 corp: 35/1956b lim: 100 exec/s: 53 rss: 74Mb L: 20/100 MS: 1 ChangeBit- 00:08:04.877 [2024-07-15 16:18:50.349319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.877 [2024-07-15 16:18:50.349345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.349398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.877 [2024-07-15 16:18:50.349412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.349462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.877 [2024-07-15 16:18:50.349476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.349532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.877 [2024-07-15 16:18:50.349547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.349595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:08:04.877 [2024-07-15 16:18:50.349610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:04.877 #54 NEW cov: 12196 ft: 15165 corp: 36/2056b lim: 100 exec/s: 54 rss: 74Mb L: 100/100 MS: 1 ChangeBinInt- 00:08:04.877 [2024-07-15 16:18:50.399233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.877 [2024-07-15 16:18:50.399259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.399305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.877 [2024-07-15 16:18:50.399319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.877 [2024-07-15 16:18:50.399368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.877 [2024-07-15 16:18:50.399383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.877 #55 NEW cov: 12196 ft: 15195 corp: 37/2125b lim: 100 exec/s: 55 rss: 74Mb L: 69/100 MS: 1 ChangeBit- 00:08:04.877 [2024-07-15 16:18:50.449172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.877 [2024-07-15 16:18:50.449198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.136 #56 NEW cov: 12196 ft: 15204 corp: 38/2162b lim: 100 exec/s: 56 rss: 74Mb L: 37/100 MS: 1 PersAutoDict- DE: "\377\377"- 00:08:05.136 [2024-07-15 16:18:50.499328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.136 [2024-07-15 16:18:50.499355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.136 #57 NEW cov: 12196 ft: 15208 corp: 39/2182b lim: 100 exec/s: 28 rss: 74Mb L: 20/100 MS: 1 ChangeBinInt- 00:08:05.136 #57 DONE cov: 12196 ft: 15208 corp: 39/2182b lim: 100 exec/s: 28 rss: 74Mb 00:08:05.136 ###### Recommended dictionary. ###### 00:08:05.136 "\377\377\377\377\377\377\002\367" # Uses: 3 00:08:05.136 "\377\377" # Uses: 1 00:08:05.136 ###### End of recommended dictionary. 
###### 00:08:05.136 Done 57 runs in 2 second(s) 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:05.136 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:05.137 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:05.137 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:05.137 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:05.137 16:18:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:05.137 [2024-07-15 16:18:50.704413] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
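For reference, the xtrace above boils down to the per-fuzzer launch recipe sketched below. This is a reconstruction from the trace, not the run.sh source itself: SPDK_DIR and OUT stand in for the long workspace paths, the port derivation is inferred from the printf/port lines, and the redirection of the sed output into the per-run config is assumed (set -x does not show redirections).

    # Sketch of one start_llvm_fuzz iteration (index 19), as traced above.
    # SPDK_DIR ~ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk, OUT ~ $SPDK_DIR/../output.
    i=19                                # fuzzer_type from the trace
    port="44$(printf '%02d' "$i")"      # -> 4419; each instance listens on its own TCP port (derivation inferred)
    corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_$i"
    nvmf_cfg="/tmp/fuzz_json_$i.conf"
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    # LeakSanitizer options for the fuzzer process (assumed to be exported to it)
    export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

    mkdir -p "$corpus_dir"
    # Point this instance's NVMe-oF target config at its own port (output redirect assumed)
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # Suppress two known allocations so LSan does not fail the short run
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$OUT/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$i"

Here -t carries timen (one second per fuzzer), -D the on-disk corpus directory, and -Z the fuzzer_type index; the startup banner that follows is this fuzzer instance coming up on port 4419.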
00:08:05.137 [2024-07-15 16:18:50.704484] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521508 ] 00:08:05.395 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.396 [2024-07-15 16:18:50.911363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.655 [2024-07-15 16:18:50.985556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.655 [2024-07-15 16:18:51.045138] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.655 [2024-07-15 16:18:51.061336] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:05.655 INFO: Running with entropic power schedule (0xFF, 100). 00:08:05.655 INFO: Seed: 1163891063 00:08:05.655 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:08:05.655 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:08:05.655 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:05.655 INFO: A corpus is not provided, starting from an empty corpus 00:08:05.655 #2 INITED exec/s: 0 rss: 65Mb 00:08:05.655 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:05.655 This may also happen if the target rejected all inputs we tried so far 00:08:05.655 [2024-07-15 16:18:51.109405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:05.655 [2024-07-15 16:18:51.109442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.655 [2024-07-15 16:18:51.109478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:05.655 [2024-07-15 16:18:51.109496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.915 NEW_FUNC[1/697]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:05.915 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:05.915 #5 NEW cov: 11930 ft: 11931 corp: 2/28b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:08:05.915 [2024-07-15 16:18:51.470325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:05.915 [2024-07-15 16:18:51.470376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.916 [2024-07-15 16:18:51.470413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:05.916 [2024-07-15 16:18:51.470431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.175 #6 NEW cov: 12060 ft: 12449 corp: 3/52b lim: 50 exec/s: 0 rss: 72Mb L: 24/27 MS: 1 EraseBytes- 00:08:06.175 [2024-07-15 16:18:51.550351] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:06.175 [2024-07-15 16:18:51.550386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.175 [2024-07-15 16:18:51.550435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:06.175 [2024-07-15 16:18:51.550454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.175 #7 NEW cov: 12066 ft: 12815 corp: 4/76b lim: 50 exec/s: 0 rss: 72Mb L: 24/27 MS: 1 ChangeBit- 00:08:06.175 [2024-07-15 16:18:51.630585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:06.175 [2024-07-15 16:18:51.630621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.175 [2024-07-15 16:18:51.630672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:06.175 [2024-07-15 16:18:51.630691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.175 #8 NEW cov: 12151 ft: 13036 corp: 5/100b lim: 50 exec/s: 0 rss: 72Mb L: 24/27 MS: 1 CopyPart- 00:08:06.175 [2024-07-15 16:18:51.690739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:06.175 [2024-07-15 16:18:51.690772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.175 [2024-07-15 16:18:51.690806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:06.175 [2024-07-15 16:18:51.690824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.175 #14 NEW cov: 12151 ft: 13095 corp: 6/124b lim: 50 exec/s: 0 rss: 72Mb L: 24/27 MS: 1 CopyPart- 00:08:06.175 [2024-07-15 16:18:51.740892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:06.175 [2024-07-15 16:18:51.740922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.175 [2024-07-15 16:18:51.740970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473012823 len:43691 00:08:06.175 [2024-07-15 16:18:51.740988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.435 #20 NEW cov: 12151 ft: 13146 corp: 7/151b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 ChangeBinInt- 00:08:06.435 [2024-07-15 16:18:51.801084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297881261383002794 len:55770 00:08:06.435 [2024-07-15 16:18:51.801114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:06.435 [2024-07-15 16:18:51.801161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 00:08:06.435 [2024-07-15 16:18:51.801179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.435 [2024-07-15 16:18:51.801208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12297736667014015658 len:43691 00:08:06.435 [2024-07-15 16:18:51.801225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.435 [2024-07-15 16:18:51.801254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12297829382473034410 len:43531 00:08:06.435 [2024-07-15 16:18:51.801270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.435 #21 NEW cov: 12151 ft: 13578 corp: 8/192b lim: 50 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:08:06.435 [2024-07-15 16:18:51.881200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382470937258 len:43691 00:08:06.435 [2024-07-15 16:18:51.881232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.435 [2024-07-15 16:18:51.881266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:06.435 [2024-07-15 16:18:51.881291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.435 #22 NEW cov: 12151 ft: 13641 corp: 9/219b lim: 50 exec/s: 0 rss: 73Mb L: 27/41 MS: 1 ChangeBit- 00:08:06.435 [2024-07-15 16:18:51.931365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:06.435 [2024-07-15 16:18:51.931396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.435 [2024-07-15 16:18:51.931434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473012823 len:22104 00:08:06.435 [2024-07-15 16:18:51.931454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.435 #23 NEW cov: 12151 ft: 13694 corp: 10/246b lim: 50 exec/s: 0 rss: 73Mb L: 27/41 MS: 1 CopyPart- 00:08:06.435 [2024-07-15 16:18:51.981429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16981572994938350250 len:43691 00:08:06.435 [2024-07-15 16:18:51.981458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.435 [2024-07-15 16:18:51.981508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:06.435 [2024-07-15 16:18:51.981526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.695 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:06.695 #24 NEW cov: 12168 ft: 13747 corp: 11/271b lim: 50 exec/s: 0 rss: 73Mb L: 25/41 MS: 1 InsertByte- 00:08:06.695 [2024-07-15 16:18:52.061671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297881261383002794 len:55770 00:08:06.695 [2024-07-15 16:18:52.061701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.695 [2024-07-15 16:18:52.061735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 00:08:06.695 [2024-07-15 16:18:52.061753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.695 #25 NEW cov: 12168 ft: 13776 corp: 12/300b lim: 50 exec/s: 25 rss: 73Mb L: 29/41 MS: 1 EraseBytes- 00:08:06.695 [2024-07-15 16:18:52.141872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16981572994938350250 len:43691 00:08:06.695 [2024-07-15 16:18:52.141902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.695 #26 NEW cov: 12168 ft: 14141 corp: 13/319b lim: 50 exec/s: 26 rss: 73Mb L: 19/41 MS: 1 EraseBytes- 00:08:06.695 [2024-07-15 16:18:52.222098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:06.695 [2024-07-15 16:18:52.222129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.695 [2024-07-15 16:18:52.222178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43563 00:08:06.695 [2024-07-15 16:18:52.222196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.695 #27 NEW cov: 12168 ft: 14172 corp: 14/344b lim: 50 exec/s: 27 rss: 73Mb L: 25/41 MS: 1 InsertByte- 00:08:06.695 [2024-07-15 16:18:52.272257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:06.695 [2024-07-15 16:18:52.272288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.695 [2024-07-15 16:18:52.272323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:06.695 [2024-07-15 16:18:52.272342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.954 #28 NEW cov: 12168 ft: 14205 corp: 15/372b lim: 50 exec/s: 28 rss: 73Mb L: 28/41 MS: 1 CMP- DE: "\001@\000\000"- 00:08:06.954 [2024-07-15 16:18:52.352390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16981572994938350250 len:43691 00:08:06.954 [2024-07-15 16:18:52.352422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.954 [2024-07-15 16:18:52.352454] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:06.954 [2024-07-15 16:18:52.352471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.954 #29 NEW cov: 12168 ft: 14221 corp: 16/396b lim: 50 exec/s: 29 rss: 73Mb L: 24/41 MS: 1 CrossOver- 00:08:06.954 [2024-07-15 16:18:52.432641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4611686751435139585 len:60331 00:08:06.954 [2024-07-15 16:18:52.432671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.954 [2024-07-15 16:18:52.432719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:06.954 [2024-07-15 16:18:52.432737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.954 #30 NEW cov: 12168 ft: 14276 corp: 17/419b lim: 50 exec/s: 30 rss: 73Mb L: 23/41 MS: 1 PersAutoDict- DE: "\001@\000\000"- 00:08:06.954 [2024-07-15 16:18:52.482720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4611686748592799745 len:42 00:08:06.954 [2024-07-15 16:18:52.482750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.954 #34 NEW cov: 12168 ft: 14308 corp: 18/430b lim: 50 exec/s: 34 rss: 73Mb L: 11/41 MS: 4 PersAutoDict-InsertByte-CrossOver-PersAutoDict- DE: "\001@\000\000"-"\001@\000\000"- 00:08:07.214 [2024-07-15 16:18:52.542908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297547907496323754 len:43691 00:08:07.214 [2024-07-15 16:18:52.542936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.214 [2024-07-15 16:18:52.542984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:07.214 [2024-07-15 16:18:52.543003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.214 #35 NEW cov: 12168 ft: 14341 corp: 19/457b lim: 50 exec/s: 35 rss: 73Mb L: 27/41 MS: 1 ChangeBinInt- 00:08:07.214 [2024-07-15 16:18:52.593045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034666 len:43691 00:08:07.214 [2024-07-15 16:18:52.593075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.214 [2024-07-15 16:18:52.593108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:07.214 [2024-07-15 16:18:52.593126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.214 #36 NEW cov: 12168 ft: 14348 corp: 20/485b lim: 50 exec/s: 36 rss: 73Mb L: 28/41 MS: 1 ChangeBit- 00:08:07.214 [2024-07-15 16:18:52.673203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16981572994938350250 len:43691 
00:08:07.214 [2024-07-15 16:18:52.673235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.214 #37 NEW cov: 12168 ft: 14376 corp: 21/504b lim: 50 exec/s: 37 rss: 73Mb L: 19/41 MS: 1 ChangeByte- 00:08:07.214 [2024-07-15 16:18:52.733395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:48109864281063424 len:43691 00:08:07.214 [2024-07-15 16:18:52.733424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.214 [2024-07-15 16:18:52.733476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:07.214 [2024-07-15 16:18:52.733495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.473 #38 NEW cov: 12168 ft: 14421 corp: 22/525b lim: 50 exec/s: 38 rss: 73Mb L: 21/41 MS: 1 EraseBytes- 00:08:07.473 [2024-07-15 16:18:52.813679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297547907496323754 len:43691 00:08:07.473 [2024-07-15 16:18:52.813710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.473 [2024-07-15 16:18:52.813743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:07.473 [2024-07-15 16:18:52.813761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.473 [2024-07-15 16:18:52.813791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12250142833031948970 len:11 00:08:07.473 [2024-07-15 16:18:52.813807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.473 #39 NEW cov: 12168 ft: 14641 corp: 23/556b lim: 50 exec/s: 39 rss: 73Mb L: 31/41 MS: 1 PersAutoDict- DE: "\001@\000\000"- 00:08:07.473 [2024-07-15 16:18:52.893772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:07.473 [2024-07-15 16:18:52.893804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.473 #40 NEW cov: 12168 ft: 14645 corp: 24/570b lim: 50 exec/s: 40 rss: 73Mb L: 14/41 MS: 1 EraseBytes- 00:08:07.473 [2024-07-15 16:18:52.953997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:48109864281063424 len:43691 00:08:07.473 [2024-07-15 16:18:52.954028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.473 [2024-07-15 16:18:52.954077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297846974659078826 len:43691 00:08:07.473 [2024-07-15 16:18:52.954096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.473 #41 NEW cov: 12175 ft: 14672 corp: 25/591b lim: 50 exec/s: 41 rss: 73Mb L: 21/41 MS: 1 
ChangeBit- 00:08:07.473 [2024-07-15 16:18:53.034175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:07.473 [2024-07-15 16:18:53.034206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.473 [2024-07-15 16:18:53.034255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:07.473 [2024-07-15 16:18:53.034273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.751 #42 NEW cov: 12175 ft: 14692 corp: 26/615b lim: 50 exec/s: 42 rss: 73Mb L: 24/41 MS: 1 ShuffleBytes- 00:08:07.751 [2024-07-15 16:18:53.084391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12297829382473034410 len:43691 00:08:07.751 [2024-07-15 16:18:53.084421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.751 [2024-07-15 16:18:53.084469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 00:08:07.751 [2024-07-15 16:18:53.084487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.751 [2024-07-15 16:18:53.084520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 00:08:07.751 [2024-07-15 16:18:53.084544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.751 [2024-07-15 16:18:53.084573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12297829382473034410 len:43691 00:08:07.751 [2024-07-15 16:18:53.084589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.751 #43 NEW cov: 12175 ft: 14715 corp: 27/663b lim: 50 exec/s: 21 rss: 73Mb L: 48/48 MS: 1 CrossOver- 00:08:07.751 #43 DONE cov: 12175 ft: 14715 corp: 27/663b lim: 50 exec/s: 21 rss: 73Mb 00:08:07.751 ###### Recommended dictionary. ###### 00:08:07.751 "\001@\000\000" # Uses: 4 00:08:07.751 ###### End of recommended dictionary. 
###### 00:08:07.751 Done 43 runs in 2 second(s) 00:08:07.751 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:07.751 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:07.752 16:18:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:07.752 [2024-07-15 16:18:53.307581] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
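Since the rest of this log is dominated by these records, a quick key to the libFuzzer status lines, using one line from the run that just finished (the field meanings are standard libFuzzer output, not SPDK-specific):

    #43 NEW cov: 12175 ft: 14715 corp: 27/663b lim: 50 exec/s: 21 rss: 73Mb L: 48/48 MS: 1 CrossOver-

#43 is the running execution count and NEW means the input was added to the corpus; cov and ft are covered code edges and coverage features; corp is the corpus size (27 inputs, 663 bytes total); lim the current input-length limit; exec/s the execution rate; rss resident memory; L the new input's length against the largest in the corpus; and MS the mutation sequence that produced it (here a single CrossOver). DONE marks the end of a run, and the "Recommended dictionary" block lists byte sequences libFuzzer found productive, with their use counts. The interleaved *NOTICE* lines are SPDK echoing each NVMe command the fuzzer submitted and the error completion it received. The trace just above launches fuzzer 20 with the same recipe sketched earlier, now on port 4420, which makes the sed substitution on trsvcid a no-op.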
00:08:07.752 [2024-07-15 16:18:53.307652] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521867 ] 00:08:08.016 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.016 [2024-07-15 16:18:53.505204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.016 [2024-07-15 16:18:53.577404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.275 [2024-07-15 16:18:53.637271] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.275 [2024-07-15 16:18:53.653462] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:08.275 INFO: Running with entropic power schedule (0xFF, 100). 00:08:08.275 INFO: Seed: 3756940586 00:08:08.275 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:08:08.275 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:08:08.275 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:08.275 INFO: A corpus is not provided, starting from an empty corpus 00:08:08.275 #2 INITED exec/s: 0 rss: 64Mb 00:08:08.275 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:08.275 This may also happen if the target rejected all inputs we tried so far 00:08:08.275 [2024-07-15 16:18:53.698696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.275 [2024-07-15 16:18:53.698728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.534 NEW_FUNC[1/699]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:08.534 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:08.534 #6 NEW cov: 11988 ft: 11989 corp: 2/26b lim: 90 exec/s: 0 rss: 72Mb L: 25/25 MS: 4 InsertByte-CopyPart-InsertByte-InsertRepeatedBytes- 00:08:08.534 [2024-07-15 16:18:54.039545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.534 [2024-07-15 16:18:54.039599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.534 #7 NEW cov: 12118 ft: 12461 corp: 3/44b lim: 90 exec/s: 0 rss: 72Mb L: 18/25 MS: 1 EraseBytes- 00:08:08.534 [2024-07-15 16:18:54.089914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.534 [2024-07-15 16:18:54.089943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.534 [2024-07-15 16:18:54.089982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.534 [2024-07-15 16:18:54.089999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.534 [2024-07-15 16:18:54.090054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 
00:08:08.534 [2024-07-15 16:18:54.090070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.793 #8 NEW cov: 12124 ft: 13630 corp: 4/103b lim: 90 exec/s: 0 rss: 72Mb L: 59/59 MS: 1 InsertRepeatedBytes- 00:08:08.793 [2024-07-15 16:18:54.140054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.793 [2024-07-15 16:18:54.140082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.140125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.793 [2024-07-15 16:18:54.140141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.140197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.793 [2024-07-15 16:18:54.140212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.793 #11 NEW cov: 12209 ft: 13870 corp: 5/158b lim: 90 exec/s: 0 rss: 72Mb L: 55/59 MS: 3 EraseBytes-CopyPart-InsertRepeatedBytes- 00:08:08.793 [2024-07-15 16:18:54.180191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.793 [2024-07-15 16:18:54.180219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.180265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.793 [2024-07-15 16:18:54.180283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.180337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.793 [2024-07-15 16:18:54.180354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.793 #12 NEW cov: 12209 ft: 14025 corp: 6/213b lim: 90 exec/s: 0 rss: 72Mb L: 55/59 MS: 1 ShuffleBytes- 00:08:08.793 [2024-07-15 16:18:54.230316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.793 [2024-07-15 16:18:54.230343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.230388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.793 [2024-07-15 16:18:54.230404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.230461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.793 [2024-07-15 16:18:54.230477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.793 #13 NEW cov: 12209 ft: 14148 corp: 7/272b lim: 90 exec/s: 0 rss: 72Mb L: 59/59 MS: 1 ChangeBinInt- 
00:08:08.793 [2024-07-15 16:18:54.280433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.793 [2024-07-15 16:18:54.280461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.280509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.793 [2024-07-15 16:18:54.280526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.280592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.793 [2024-07-15 16:18:54.280607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.793 #14 NEW cov: 12209 ft: 14205 corp: 8/327b lim: 90 exec/s: 0 rss: 72Mb L: 55/59 MS: 1 ShuffleBytes- 00:08:08.793 [2024-07-15 16:18:54.320556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.793 [2024-07-15 16:18:54.320584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.320632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.793 [2024-07-15 16:18:54.320648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.320701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.793 [2024-07-15 16:18:54.320716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.793 #15 NEW cov: 12209 ft: 14244 corp: 9/382b lim: 90 exec/s: 0 rss: 73Mb L: 55/59 MS: 1 ChangeBit- 00:08:08.793 [2024-07-15 16:18:54.370576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.793 [2024-07-15 16:18:54.370604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.793 [2024-07-15 16:18:54.370643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.793 [2024-07-15 16:18:54.370663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.053 #16 NEW cov: 12209 ft: 14605 corp: 10/420b lim: 90 exec/s: 0 rss: 73Mb L: 38/59 MS: 1 CrossOver- 00:08:09.053 [2024-07-15 16:18:54.420559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.053 [2024-07-15 16:18:54.420587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.053 #17 NEW cov: 12209 ft: 14663 corp: 11/438b lim: 90 exec/s: 0 rss: 73Mb L: 18/59 MS: 1 CrossOver- 00:08:09.053 [2024-07-15 16:18:54.460936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.053 [2024-07-15 16:18:54.460963] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.053 [2024-07-15 16:18:54.461012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.053 [2024-07-15 16:18:54.461028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.053 [2024-07-15 16:18:54.461085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.053 [2024-07-15 16:18:54.461101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.053 #18 NEW cov: 12209 ft: 14710 corp: 12/493b lim: 90 exec/s: 0 rss: 73Mb L: 55/59 MS: 1 ChangeByte- 00:08:09.053 [2024-07-15 16:18:54.500788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.053 [2024-07-15 16:18:54.500815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.053 #24 NEW cov: 12209 ft: 14848 corp: 13/511b lim: 90 exec/s: 0 rss: 73Mb L: 18/59 MS: 1 ChangeByte- 00:08:09.053 [2024-07-15 16:18:54.551212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.053 [2024-07-15 16:18:54.551240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.053 [2024-07-15 16:18:54.551277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.053 [2024-07-15 16:18:54.551292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.053 [2024-07-15 16:18:54.551350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.053 [2024-07-15 16:18:54.551366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.053 #28 NEW cov: 12209 ft: 14865 corp: 14/578b lim: 90 exec/s: 0 rss: 73Mb L: 67/67 MS: 4 EraseBytes-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:08:09.053 [2024-07-15 16:18:54.591300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.053 [2024-07-15 16:18:54.591327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.053 [2024-07-15 16:18:54.591365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.053 [2024-07-15 16:18:54.591381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.053 [2024-07-15 16:18:54.591434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.053 [2024-07-15 16:18:54.591450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.053 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:09.053 #29 NEW cov: 12232 ft: 
14925 corp: 15/633b lim: 90 exec/s: 0 rss: 73Mb L: 55/67 MS: 1 ChangeByte- 00:08:09.311 [2024-07-15 16:18:54.631460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.311 [2024-07-15 16:18:54.631487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.311 [2024-07-15 16:18:54.631525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.311 [2024-07-15 16:18:54.631547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.311 [2024-07-15 16:18:54.631601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.311 [2024-07-15 16:18:54.631618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.311 #30 NEW cov: 12232 ft: 14947 corp: 16/688b lim: 90 exec/s: 0 rss: 73Mb L: 55/67 MS: 1 ShuffleBytes- 00:08:09.311 [2024-07-15 16:18:54.671524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.311 [2024-07-15 16:18:54.671555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.311 [2024-07-15 16:18:54.671605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.311 [2024-07-15 16:18:54.671621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.311 [2024-07-15 16:18:54.671677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.311 [2024-07-15 16:18:54.671694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.311 #32 NEW cov: 12232 ft: 14972 corp: 17/744b lim: 90 exec/s: 32 rss: 73Mb L: 56/67 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:09.311 [2024-07-15 16:18:54.711860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.311 [2024-07-15 16:18:54.711887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.312 [2024-07-15 16:18:54.711938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.312 [2024-07-15 16:18:54.711955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.312 [2024-07-15 16:18:54.712009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.312 [2024-07-15 16:18:54.712025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.312 [2024-07-15 16:18:54.712079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.312 [2024-07-15 16:18:54.712094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.312 #33 NEW cov: 
12232 ft: 15340 corp: 18/823b lim: 90 exec/s: 33 rss: 73Mb L: 79/79 MS: 1 InsertRepeatedBytes- 00:08:09.312 [2024-07-15 16:18:54.751795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.312 [2024-07-15 16:18:54.751822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.312 [2024-07-15 16:18:54.751860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.312 [2024-07-15 16:18:54.751876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.312 [2024-07-15 16:18:54.751932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.312 [2024-07-15 16:18:54.751950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.312 #34 NEW cov: 12232 ft: 15370 corp: 19/885b lim: 90 exec/s: 34 rss: 73Mb L: 62/79 MS: 1 EraseBytes- 00:08:09.312 [2024-07-15 16:18:54.801792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.312 [2024-07-15 16:18:54.801818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.312 [2024-07-15 16:18:54.801873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.312 [2024-07-15 16:18:54.801889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.312 #35 NEW cov: 12232 ft: 15380 corp: 20/928b lim: 90 exec/s: 35 rss: 73Mb L: 43/79 MS: 1 CrossOver- 00:08:09.312 [2024-07-15 16:18:54.852077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.312 [2024-07-15 16:18:54.852104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.312 [2024-07-15 16:18:54.852142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.312 [2024-07-15 16:18:54.852168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.312 [2024-07-15 16:18:54.852225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.312 [2024-07-15 16:18:54.852242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.312 #36 NEW cov: 12232 ft: 15408 corp: 21/987b lim: 90 exec/s: 36 rss: 73Mb L: 59/79 MS: 1 CopyPart- 00:08:09.571 [2024-07-15 16:18:54.902229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.571 [2024-07-15 16:18:54.902257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:54.902300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.571 [2024-07-15 16:18:54.902317] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:54.902374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.571 [2024-07-15 16:18:54.902390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.571 #37 NEW cov: 12232 ft: 15441 corp: 22/1042b lim: 90 exec/s: 37 rss: 73Mb L: 55/79 MS: 1 ChangeBinInt- 00:08:09.571 [2024-07-15 16:18:54.952396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.571 [2024-07-15 16:18:54.952424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:54.952462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.571 [2024-07-15 16:18:54.952478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:54.952539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.571 [2024-07-15 16:18:54.952556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.571 #38 NEW cov: 12232 ft: 15453 corp: 23/1097b lim: 90 exec/s: 38 rss: 73Mb L: 55/79 MS: 1 ShuffleBytes- 00:08:09.571 [2024-07-15 16:18:54.992305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.571 [2024-07-15 16:18:54.992336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:54.992391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.571 [2024-07-15 16:18:54.992408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.571 #39 NEW cov: 12232 ft: 15464 corp: 24/1146b lim: 90 exec/s: 39 rss: 73Mb L: 49/79 MS: 1 CrossOver- 00:08:09.571 [2024-07-15 16:18:55.032441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.571 [2024-07-15 16:18:55.032467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:55.032517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.571 [2024-07-15 16:18:55.032536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.571 #40 NEW cov: 12232 ft: 15476 corp: 25/1195b lim: 90 exec/s: 40 rss: 73Mb L: 49/79 MS: 1 ChangeBinInt- 00:08:09.571 [2024-07-15 16:18:55.082756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.571 [2024-07-15 16:18:55.082785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:55.082823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE 
(11) sqid:1 cid:1 nsid:0 00:08:09.571 [2024-07-15 16:18:55.082840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:55.082894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.571 [2024-07-15 16:18:55.082911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.571 #41 NEW cov: 12232 ft: 15524 corp: 26/1258b lim: 90 exec/s: 41 rss: 73Mb L: 63/79 MS: 1 CMP- DE: "\224\030\024\\9\177\000\000"- 00:08:09.571 [2024-07-15 16:18:55.122676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.571 [2024-07-15 16:18:55.122703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.571 [2024-07-15 16:18:55.122751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.571 [2024-07-15 16:18:55.122767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.831 #42 NEW cov: 12232 ft: 15551 corp: 27/1307b lim: 90 exec/s: 42 rss: 74Mb L: 49/79 MS: 1 CrossOver- 00:08:09.831 [2024-07-15 16:18:55.173002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.831 [2024-07-15 16:18:55.173030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.173068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.831 [2024-07-15 16:18:55.173084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.173141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.831 [2024-07-15 16:18:55.173154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.831 #43 NEW cov: 12232 ft: 15560 corp: 28/1363b lim: 90 exec/s: 43 rss: 74Mb L: 56/79 MS: 1 InsertByte- 00:08:09.831 [2024-07-15 16:18:55.212970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.831 [2024-07-15 16:18:55.213000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.213055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.831 [2024-07-15 16:18:55.213070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.831 #44 NEW cov: 12232 ft: 15584 corp: 29/1412b lim: 90 exec/s: 44 rss: 74Mb L: 49/79 MS: 1 ChangeBit- 00:08:09.831 [2024-07-15 16:18:55.263451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.831 [2024-07-15 16:18:55.263478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.263532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.831 [2024-07-15 16:18:55.263547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.263599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.831 [2024-07-15 16:18:55.263616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.263672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.831 [2024-07-15 16:18:55.263688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.831 #45 NEW cov: 12232 ft: 15603 corp: 30/1497b lim: 90 exec/s: 45 rss: 74Mb L: 85/85 MS: 1 CrossOver- 00:08:09.831 [2024-07-15 16:18:55.313394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.831 [2024-07-15 16:18:55.313421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.313458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.831 [2024-07-15 16:18:55.313474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.313531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.831 [2024-07-15 16:18:55.313548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.831 #46 NEW cov: 12232 ft: 15628 corp: 31/1552b lim: 90 exec/s: 46 rss: 74Mb L: 55/85 MS: 1 ShuffleBytes- 00:08:09.831 [2024-07-15 16:18:55.363537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.831 [2024-07-15 16:18:55.363564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.363612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.831 [2024-07-15 16:18:55.363627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.363682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.831 [2024-07-15 16:18:55.363697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.831 #47 NEW cov: 12232 ft: 15638 corp: 32/1609b lim: 90 exec/s: 47 rss: 74Mb L: 57/85 MS: 1 InsertRepeatedBytes- 00:08:09.831 [2024-07-15 16:18:55.403630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.831 [2024-07-15 16:18:55.403656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.403697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.831 [2024-07-15 16:18:55.403712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.831 [2024-07-15 16:18:55.403770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.831 [2024-07-15 16:18:55.403786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.090 #48 NEW cov: 12232 ft: 15653 corp: 33/1673b lim: 90 exec/s: 48 rss: 74Mb L: 64/85 MS: 1 PersAutoDict- DE: "\224\030\024\\9\177\000\000"- 00:08:10.090 [2024-07-15 16:18:55.453971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.090 [2024-07-15 16:18:55.453998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.454048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.090 [2024-07-15 16:18:55.454064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.454118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.090 [2024-07-15 16:18:55.454134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.454189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.090 [2024-07-15 16:18:55.454206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.090 #49 NEW cov: 12232 ft: 15672 corp: 34/1758b lim: 90 exec/s: 49 rss: 74Mb L: 85/85 MS: 1 PersAutoDict- DE: "\224\030\024\\9\177\000\000"- 00:08:10.090 [2024-07-15 16:18:55.503921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.090 [2024-07-15 16:18:55.503948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.503986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.090 [2024-07-15 16:18:55.504003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.504058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.090 [2024-07-15 16:18:55.504074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.090 #50 NEW cov: 12232 ft: 15680 corp: 35/1829b lim: 90 exec/s: 50 rss: 74Mb L: 71/85 MS: 1 InsertRepeatedBytes- 00:08:10.090 [2024-07-15 16:18:55.544221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.090 [2024-07-15 16:18:55.544248] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.544300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.090 [2024-07-15 16:18:55.544317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.544372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.090 [2024-07-15 16:18:55.544389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.544445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.090 [2024-07-15 16:18:55.544464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.090 #55 NEW cov: 12232 ft: 15720 corp: 36/1904b lim: 90 exec/s: 55 rss: 74Mb L: 75/85 MS: 5 CrossOver-ChangeByte-InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:10.090 [2024-07-15 16:18:55.584361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.090 [2024-07-15 16:18:55.584387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.584441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.090 [2024-07-15 16:18:55.584458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.584511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.090 [2024-07-15 16:18:55.584532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.090 [2024-07-15 16:18:55.584585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.090 [2024-07-15 16:18:55.584602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.091 #56 NEW cov: 12232 ft: 15723 corp: 37/1989b lim: 90 exec/s: 56 rss: 74Mb L: 85/85 MS: 1 CrossOver- 00:08:10.091 [2024-07-15 16:18:55.634349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.091 [2024-07-15 16:18:55.634377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.091 [2024-07-15 16:18:55.634424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.091 [2024-07-15 16:18:55.634440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.091 [2024-07-15 16:18:55.634497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.091 [2024-07-15 16:18:55.634512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:10.091 #57 NEW cov: 12232 ft: 15732 corp: 38/2048b lim: 90 exec/s: 57 rss: 74Mb L: 59/85 MS: 1 CrossOver-
00:08:10.350 [2024-07-15 16:18:55.674472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0
00:08:10.350 [2024-07-15 16:18:55.674500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:10.350 [2024-07-15 16:18:55.674548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0
00:08:10.350 [2024-07-15 16:18:55.674565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:10.350 [2024-07-15 16:18:55.674621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0
00:08:10.350 [2024-07-15 16:18:55.674636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:10.350 #58 NEW cov: 12232 ft: 15755 corp: 39/2103b lim: 90 exec/s: 29 rss: 75Mb L: 55/85 MS: 1 ShuffleBytes-
00:08:10.350 #58 DONE cov: 12232 ft: 15755 corp: 39/2103b lim: 90 exec/s: 29 rss: 75Mb
00:08:10.350 ###### Recommended dictionary. ######
00:08:10.350 "\224\030\024\\9\177\000\000" # Uses: 2
00:08:10.350 ###### End of recommended dictionary. ######
00:08:10.350 Done 58 runs in 2 second(s)
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421'
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:10.350 16:18:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21
00:08:10.350 [2024-07-15 16:18:55.898312] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization...
00:08:10.350 [2024-07-15 16:18:55.898386] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522214 ]
00:08:10.609 EAL: No free 2048 kB hugepages reported on node 1
00:08:10.609 [2024-07-15 16:18:56.096257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:10.609 [2024-07-15 16:18:56.168419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.868 [2024-07-15 16:18:56.228277] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:10.868 [2024-07-15 16:18:56.244469] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:08:10.868 INFO: Running with entropic power schedule (0xFF, 100).
00:08:10.868 INFO: Seed: 2053934558
00:08:10.868 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6),
00:08:10.868 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688),
00:08:10.868 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:08:10.868 INFO: A corpus is not provided, starting from an empty corpus
00:08:10.868 #2 INITED exec/s: 0 rss: 65Mb
00:08:10.868 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:10.868 This may also happen if the target rejected all inputs we tried so far 00:08:10.868 [2024-07-15 16:18:56.310014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.868 [2024-07-15 16:18:56.310044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.868 [2024-07-15 16:18:56.310086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.868 [2024-07-15 16:18:56.310103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.868 [2024-07-15 16:18:56.310157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.868 [2024-07-15 16:18:56.310172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.126 NEW_FUNC[1/699]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:11.126 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:11.126 #20 NEW cov: 11963 ft: 11964 corp: 2/31b lim: 50 exec/s: 0 rss: 72Mb L: 30/30 MS: 3 ChangeBinInt-InsertByte-InsertRepeatedBytes- 00:08:11.126 [2024-07-15 16:18:56.651036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.126 [2024-07-15 16:18:56.651082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.126 [2024-07-15 16:18:56.651151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.126 [2024-07-15 16:18:56.651169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.126 [2024-07-15 16:18:56.651225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.126 [2024-07-15 16:18:56.651243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.126 [2024-07-15 16:18:56.651299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.126 [2024-07-15 16:18:56.651316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.126 #21 NEW cov: 12093 ft: 12868 corp: 3/75b lim: 50 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:08:11.385 [2024-07-15 16:18:56.711063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.385 [2024-07-15 16:18:56.711094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.711151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.385 [2024-07-15 16:18:56.711166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.711220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.385 [2024-07-15 16:18:56.711236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.711287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.385 [2024-07-15 16:18:56.711301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.385 #25 NEW cov: 12099 ft: 13140 corp: 4/122b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 4 ChangeBit-ChangeBinInt-InsertByte-InsertRepeatedBytes- 00:08:11.385 [2024-07-15 16:18:56.751212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.385 [2024-07-15 16:18:56.751239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.751301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.385 [2024-07-15 16:18:56.751318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.751371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.385 [2024-07-15 16:18:56.751385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.751440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.385 [2024-07-15 16:18:56.751457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.385 #26 NEW cov: 12184 ft: 13366 corp: 5/169b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeByte- 00:08:11.385 [2024-07-15 16:18:56.801167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.385 [2024-07-15 16:18:56.801193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.801240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.385 [2024-07-15 16:18:56.801256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.801307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.385 [2024-07-15 16:18:56.801321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.385 #27 NEW cov: 12184 ft: 13435 corp: 6/204b lim: 50 exec/s: 0 rss: 72Mb L: 35/47 MS: 1 EraseBytes- 00:08:11.385 [2024-07-15 16:18:56.851318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.385 [2024-07-15 16:18:56.851345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.851391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.385 [2024-07-15 16:18:56.851407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.851459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.385 [2024-07-15 16:18:56.851474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.385 #28 NEW cov: 12184 ft: 13492 corp: 7/234b lim: 50 exec/s: 0 rss: 72Mb L: 30/47 MS: 1 CrossOver- 00:08:11.385 [2024-07-15 16:18:56.891484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.385 [2024-07-15 16:18:56.891513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.891569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.385 [2024-07-15 16:18:56.891586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.891639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.385 [2024-07-15 16:18:56.891655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.385 #29 NEW cov: 12184 ft: 13574 corp: 8/264b lim: 50 exec/s: 0 rss: 72Mb L: 30/47 MS: 1 ChangeBit- 00:08:11.385 [2024-07-15 16:18:56.941621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.385 [2024-07-15 16:18:56.941648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.941694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.385 [2024-07-15 16:18:56.941709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.385 [2024-07-15 16:18:56.941762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.385 [2024-07-15 16:18:56.941778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.385 #30 NEW cov: 12184 ft: 13663 corp: 9/294b lim: 50 exec/s: 0 rss: 73Mb L: 30/47 MS: 1 ChangeBinInt- 00:08:11.645 [2024-07-15 16:18:56.981661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.645 [2024-07-15 16:18:56.981690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:56.981729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.645 [2024-07-15 16:18:56.981744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:56.981794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.645 [2024-07-15 16:18:56.981810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.645 #31 NEW cov: 12184 ft: 13711 corp: 10/329b lim: 50 exec/s: 0 rss: 73Mb L: 35/47 MS: 1 EraseBytes- 00:08:11.645 [2024-07-15 16:18:57.031799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.645 [2024-07-15 16:18:57.031828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.031870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.645 [2024-07-15 16:18:57.031887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.031939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.645 [2024-07-15 16:18:57.031956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.645 #32 NEW cov: 12184 ft: 13801 corp: 11/359b lim: 50 exec/s: 0 rss: 73Mb L: 30/47 MS: 1 EraseBytes- 00:08:11.645 [2024-07-15 16:18:57.071932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.645 [2024-07-15 16:18:57.071960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.071997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.645 [2024-07-15 16:18:57.072012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.072065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.645 [2024-07-15 16:18:57.072078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.645 #33 NEW cov: 12184 ft: 13831 corp: 12/389b lim: 50 exec/s: 0 rss: 73Mb L: 30/47 MS: 1 ChangeBinInt- 00:08:11.645 [2024-07-15 16:18:57.122053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.645 [2024-07-15 16:18:57.122081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.122119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.645 [2024-07-15 16:18:57.122140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.122193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.645 [2024-07-15 16:18:57.122207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.645 #34 NEW cov: 12184 ft: 13898 corp: 13/419b lim: 50 exec/s: 0 rss: 73Mb L: 30/47 MS: 1 ShuffleBytes- 00:08:11.645 [2024-07-15 16:18:57.172163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.645 [2024-07-15 16:18:57.172190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.172236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.645 [2024-07-15 16:18:57.172251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.172301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.645 [2024-07-15 16:18:57.172315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.645 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:11.645 #35 NEW cov: 12207 ft: 13938 corp: 14/454b lim: 50 exec/s: 0 rss: 73Mb L: 35/47 MS: 1 ChangeBit- 00:08:11.645 [2024-07-15 16:18:57.222335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.645 [2024-07-15 16:18:57.222363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.222403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.645 [2024-07-15 16:18:57.222423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.645 [2024-07-15 16:18:57.222476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.645 [2024-07-15 16:18:57.222491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.904 #36 NEW cov: 12207 ft: 13949 corp: 15/484b lim: 50 exec/s: 0 rss: 73Mb L: 30/47 MS: 1 CopyPart- 00:08:11.904 [2024-07-15 16:18:57.272458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-15 16:18:57.272486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.272522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.904 [2024-07-15 16:18:57.272546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.272599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.904 [2024-07-15 16:18:57.272615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.904 #37 NEW cov: 12207 ft: 13996 corp: 16/514b lim: 50 exec/s: 37 rss: 73Mb L: 30/47 MS: 1 ChangeByte- 00:08:11.904 [2024-07-15 16:18:57.322621] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-15 16:18:57.322649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.322688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.904 [2024-07-15 16:18:57.322703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.322756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.904 [2024-07-15 16:18:57.322772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.904 #38 NEW cov: 12207 ft: 14008 corp: 17/544b lim: 50 exec/s: 38 rss: 73Mb L: 30/47 MS: 1 ChangeBinInt- 00:08:11.904 [2024-07-15 16:18:57.362882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-15 16:18:57.362910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.362957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.904 [2024-07-15 16:18:57.362972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.363024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.904 [2024-07-15 16:18:57.363039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.363091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.904 [2024-07-15 16:18:57.363108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.904 #39 NEW cov: 12207 ft: 14076 corp: 18/591b lim: 50 exec/s: 39 rss: 73Mb L: 47/47 MS: 1 CrossOver- 00:08:11.904 [2024-07-15 16:18:57.413026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-15 16:18:57.413052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.413102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.904 [2024-07-15 16:18:57.413117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.413168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.904 [2024-07-15 16:18:57.413184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.413237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.904 [2024-07-15 
16:18:57.413252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.904 #40 NEW cov: 12207 ft: 14083 corp: 19/639b lim: 50 exec/s: 40 rss: 73Mb L: 48/48 MS: 1 CrossOver- 00:08:11.904 [2024-07-15 16:18:57.453132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-15 16:18:57.453158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-15 16:18:57.453207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.905 [2024-07-15 16:18:57.453223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.905 [2024-07-15 16:18:57.453272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.905 [2024-07-15 16:18:57.453288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.905 [2024-07-15 16:18:57.453340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.905 [2024-07-15 16:18:57.453356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.186 #41 NEW cov: 12207 ft: 14111 corp: 20/681b lim: 50 exec/s: 41 rss: 73Mb L: 42/48 MS: 1 InsertRepeatedBytes- 00:08:12.186 [2024-07-15 16:18:57.503087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-15 16:18:57.503117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.503153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.186 [2024-07-15 16:18:57.503168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.503220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.186 [2024-07-15 16:18:57.503234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.186 #42 NEW cov: 12207 ft: 14119 corp: 21/711b lim: 50 exec/s: 42 rss: 73Mb L: 30/48 MS: 1 ChangeByte- 00:08:12.186 [2024-07-15 16:18:57.553264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-15 16:18:57.553290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.553338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.186 [2024-07-15 16:18:57.553353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.553403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.186 [2024-07-15 
16:18:57.553418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.186 #43 NEW cov: 12207 ft: 14133 corp: 22/741b lim: 50 exec/s: 43 rss: 73Mb L: 30/48 MS: 1 CrossOver- 00:08:12.186 [2024-07-15 16:18:57.593520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-15 16:18:57.593550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.593606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.186 [2024-07-15 16:18:57.593622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.593675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.186 [2024-07-15 16:18:57.593690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.593742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.186 [2024-07-15 16:18:57.593757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.186 #44 NEW cov: 12207 ft: 14144 corp: 23/783b lim: 50 exec/s: 44 rss: 73Mb L: 42/48 MS: 1 EraseBytes- 00:08:12.186 [2024-07-15 16:18:57.633627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-15 16:18:57.633654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.633704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.186 [2024-07-15 16:18:57.633720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.633769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.186 [2024-07-15 16:18:57.633785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.633839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.186 [2024-07-15 16:18:57.633855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.186 #45 NEW cov: 12207 ft: 14146 corp: 24/830b lim: 50 exec/s: 45 rss: 73Mb L: 47/48 MS: 1 InsertRepeatedBytes- 00:08:12.186 [2024-07-15 16:18:57.673742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-15 16:18:57.673768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.673820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.186 [2024-07-15 
16:18:57.673836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.673886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.186 [2024-07-15 16:18:57.673901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.673953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.186 [2024-07-15 16:18:57.673968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.186 #46 NEW cov: 12207 ft: 14245 corp: 25/877b lim: 50 exec/s: 46 rss: 73Mb L: 47/48 MS: 1 ChangeByte- 00:08:12.186 [2024-07-15 16:18:57.723790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-15 16:18:57.723817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.723862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.186 [2024-07-15 16:18:57.723877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.186 [2024-07-15 16:18:57.723931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.186 [2024-07-15 16:18:57.723945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.186 #47 NEW cov: 12207 ft: 14313 corp: 26/907b lim: 50 exec/s: 47 rss: 74Mb L: 30/48 MS: 1 ChangeByte- 00:08:12.444 [2024-07-15 16:18:57.774182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.444 [2024-07-15 16:18:57.774209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.444 [2024-07-15 16:18:57.774264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.444 [2024-07-15 16:18:57.774279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.444 [2024-07-15 16:18:57.774330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.444 [2024-07-15 16:18:57.774346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.444 [2024-07-15 16:18:57.774396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.444 [2024-07-15 16:18:57.774411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.774462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:12.445 [2024-07-15 16:18:57.774494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 
cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:12.445 #48 NEW cov: 12207 ft: 14389 corp: 27/957b lim: 50 exec/s: 48 rss: 74Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:08:12.445 [2024-07-15 16:18:57.824169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-15 16:18:57.824195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.824244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.445 [2024-07-15 16:18:57.824260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.824311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.445 [2024-07-15 16:18:57.824326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.824377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.445 [2024-07-15 16:18:57.824393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.445 #49 NEW cov: 12207 ft: 14416 corp: 28/1006b lim: 50 exec/s: 49 rss: 74Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:08:12.445 [2024-07-15 16:18:57.874330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-15 16:18:57.874356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.874409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.445 [2024-07-15 16:18:57.874426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.874475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.445 [2024-07-15 16:18:57.874491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.874544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.445 [2024-07-15 16:18:57.874559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.445 #50 NEW cov: 12207 ft: 14433 corp: 29/1047b lim: 50 exec/s: 50 rss: 74Mb L: 41/50 MS: 1 CrossOver- 00:08:12.445 [2024-07-15 16:18:57.914276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-15 16:18:57.914302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.914350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.445 [2024-07-15 16:18:57.914366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.914415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.445 [2024-07-15 16:18:57.914430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.445 #51 NEW cov: 12207 ft: 14492 corp: 30/1082b lim: 50 exec/s: 51 rss: 74Mb L: 35/50 MS: 1 ChangeBit- 00:08:12.445 [2024-07-15 16:18:57.954687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-15 16:18:57.954712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.954764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.445 [2024-07-15 16:18:57.954782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.954834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.445 [2024-07-15 16:18:57.954849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.954901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.445 [2024-07-15 16:18:57.954915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:57.954965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:12.445 [2024-07-15 16:18:57.954980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:12.445 #52 NEW cov: 12207 ft: 14503 corp: 31/1132b lim: 50 exec/s: 52 rss: 74Mb L: 50/50 MS: 1 ShuffleBytes- 00:08:12.445 [2024-07-15 16:18:58.004697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-15 16:18:58.004724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:58.004773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.445 [2024-07-15 16:18:58.004789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:58.004841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.445 [2024-07-15 16:18:58.004856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.445 [2024-07-15 16:18:58.004907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.445 [2024-07-15 16:18:58.004923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.704 #53 NEW cov: 12207 ft: 14510 corp: 32/1179b lim: 50 exec/s: 
53 rss: 74Mb L: 47/50 MS: 1 ChangeBinInt- 00:08:12.704 [2024-07-15 16:18:58.044821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-15 16:18:58.044849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.044899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.704 [2024-07-15 16:18:58.044914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.044965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.704 [2024-07-15 16:18:58.044980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.045030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.704 [2024-07-15 16:18:58.045046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.704 #54 NEW cov: 12207 ft: 14519 corp: 33/1219b lim: 50 exec/s: 54 rss: 74Mb L: 40/50 MS: 1 CrossOver- 00:08:12.704 [2024-07-15 16:18:58.094509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-15 16:18:58.094546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 #55 NEW cov: 12207 ft: 15354 corp: 34/1236b lim: 50 exec/s: 55 rss: 74Mb L: 17/50 MS: 1 EraseBytes- 00:08:12.704 [2024-07-15 16:18:58.135045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-15 16:18:58.135070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.135121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.704 [2024-07-15 16:18:58.135136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.135187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.704 [2024-07-15 16:18:58.135203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.135255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.704 [2024-07-15 16:18:58.135271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.704 #56 NEW cov: 12207 ft: 15366 corp: 35/1279b lim: 50 exec/s: 56 rss: 74Mb L: 43/50 MS: 1 InsertRepeatedBytes- 00:08:12.704 [2024-07-15 16:18:58.175213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-15 16:18:58.175240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.175286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.704 [2024-07-15 16:18:58.175301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.175350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.704 [2024-07-15 16:18:58.175366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.175417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.704 [2024-07-15 16:18:58.175431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.704 #57 NEW cov: 12207 ft: 15378 corp: 36/1324b lim: 50 exec/s: 57 rss: 74Mb L: 45/50 MS: 1 CMP- DE: "\002\000"- 00:08:12.704 [2024-07-15 16:18:58.225321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-15 16:18:58.225347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.225401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.704 [2024-07-15 16:18:58.225416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.225464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.704 [2024-07-15 16:18:58.225479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.225534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.704 [2024-07-15 16:18:58.225549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.704 #58 NEW cov: 12207 ft: 15380 corp: 37/1366b lim: 50 exec/s: 58 rss: 74Mb L: 42/50 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:12.704 [2024-07-15 16:18:58.275470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-15 16:18:58.275502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.275547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.704 [2024-07-15 16:18:58.275563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.275631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.704 [2024-07-15 16:18:58.275647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.704 [2024-07-15 16:18:58.275713] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.704 [2024-07-15 16:18:58.275727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.963 #59 NEW cov: 12207 ft: 15385 corp: 38/1412b lim: 50 exec/s: 29 rss: 74Mb L: 46/50 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:12.963 #59 DONE cov: 12207 ft: 15385 corp: 38/1412b lim: 50 exec/s: 29 rss: 74Mb 00:08:12.963 ###### Recommended dictionary. ###### 00:08:12.963 "\002\000" # Uses: 0 00:08:12.963 "\001\000\000\000" # Uses: 1 00:08:12.963 ###### End of recommended dictionary. ###### 00:08:12.963 Done 59 runs in 2 second(s) 00:08:12.963 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:12.963 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:12.963 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.963 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:12.963 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:12.964 16:18:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:12.964 [2024-07-15 16:18:58.494047] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
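Note on the setup steps traced above: before each run, nvmf/run.sh derives one TCP port per fuzzer instance ("44" plus the zero-padded fuzzer number, so fuzzer 22 listens on 4422), rewrites the base config's trsvcid 4420 to that port, and routes two known leaks through an LSAN suppression file before launching llvm_nvme_fuzz. A minimal standalone sketch of that logic, assuming the same tree layout as the trace; SPDK_ROOT and OUT_DIR are illustrative stand-ins for the Jenkins workspace paths:

#!/usr/bin/env bash
# Sketch of the per-instance start_llvm_fuzz steps shown in the trace above.
# SPDK_ROOT and OUT_DIR are illustrative, not taken from the log verbatim.
SPDK_ROOT=/path/to/spdk
OUT_DIR="$SPDK_ROOT/../output/llvm"

fuzzer_type=22    # -Z: which fuzzer entry point to exercise
timen=1           # -t: seconds to fuzz
core=0x1          # -m: reactor core mask

# One TCP port per fuzzer: "44" + zero-padded fuzzer number (22 -> 4422).
port="44$(printf %02d "$fuzzer_type")"

corpus_dir="$SPDK_ROOT/../corpus/llvm_nvmf_${fuzzer_type}"
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
mkdir -p "$corpus_dir"

# Point this instance's NVMe/TCP listener at the derived port.
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Suppress the two known leaks so LeakSanitizer does not fail the run.
suppress_file=/var/tmp/suppress_nvmf_fuzz
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"
export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

"$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m "$core" -s 512 -P "$OUT_DIR/" -F "$trid" \
    -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

The "Recommended dictionary" block printed at the end of the previous run lists byte sequences libFuzzer found productive; such entries can be collected into a file and fed back through libFuzzer's standard -dict=<file> option, though this harness is not shown doing so here.
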
00:08:12.964 [2024-07-15 16:18:58.494120] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522526 ] 00:08:12.964 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.221 [2024-07-15 16:18:58.691239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.221 [2024-07-15 16:18:58.761982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.478 [2024-07-15 16:18:58.821642] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.478 [2024-07-15 16:18:58.837828] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:13.478 INFO: Running with entropic power schedule (0xFF, 100). 00:08:13.478 INFO: Seed: 350238791 00:08:13.478 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:08:13.478 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:08:13.478 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:13.478 INFO: A corpus is not provided, starting from an empty corpus 00:08:13.478 #2 INITED exec/s: 0 rss: 65Mb 00:08:13.478 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:13.478 This may also happen if the target rejected all inputs we tried so far 00:08:13.478 [2024-07-15 16:18:58.886051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.478 [2024-07-15 16:18:58.886083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.478 [2024-07-15 16:18:58.886125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.478 [2024-07-15 16:18:58.886140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.478 [2024-07-15 16:18:58.886192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.478 [2024-07-15 16:18:58.886207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.735 NEW_FUNC[1/699]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:13.735 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:13.735 #5 NEW cov: 11989 ft: 11988 corp: 2/65b lim: 85 exec/s: 0 rss: 71Mb L: 64/64 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:08:13.735 [2024-07-15 16:18:59.226853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.735 [2024-07-15 16:18:59.226906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.735 [2024-07-15 16:18:59.226975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.735 [2024-07-15 16:18:59.226996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.735 #13 NEW cov: 12119 ft: 13027 corp: 3/108b lim: 85 exec/s: 0 rss: 72Mb L: 43/64 MS: 3 CopyPart-InsertByte-CrossOver- 00:08:13.735 [2024-07-15 16:18:59.266983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.735 [2024-07-15 16:18:59.267015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.735 [2024-07-15 16:18:59.267052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.735 [2024-07-15 16:18:59.267067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.735 [2024-07-15 16:18:59.267120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.735 [2024-07-15 16:18:59.267135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.735 #14 NEW cov: 12125 ft: 13174 corp: 4/165b lim: 85 exec/s: 0 rss: 72Mb L: 57/64 MS: 1 EraseBytes- 00:08:13.993 [2024-07-15 16:18:59.317262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.993 [2024-07-15 16:18:59.317291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.317335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.993 [2024-07-15 16:18:59.317351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.317402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.993 [2024-07-15 16:18:59.317418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.317468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.993 [2024-07-15 16:18:59.317484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.993 #15 NEW cov: 12210 ft: 13733 corp: 5/248b lim: 85 exec/s: 0 rss: 72Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:08:13.993 [2024-07-15 16:18:59.357251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.993 [2024-07-15 16:18:59.357277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.357314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.993 [2024-07-15 16:18:59.357329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.357381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.993 [2024-07-15 16:18:59.357397] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.993 #16 NEW cov: 12210 ft: 13863 corp: 6/311b lim: 85 exec/s: 0 rss: 72Mb L: 63/83 MS: 1 InsertRepeatedBytes- 00:08:13.993 [2024-07-15 16:18:59.407104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.993 [2024-07-15 16:18:59.407130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.993 #18 NEW cov: 12210 ft: 14742 corp: 7/330b lim: 85 exec/s: 0 rss: 72Mb L: 19/83 MS: 2 InsertByte-CrossOver- 00:08:13.993 [2024-07-15 16:18:59.447196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.993 [2024-07-15 16:18:59.447223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.993 #19 NEW cov: 12210 ft: 14848 corp: 8/350b lim: 85 exec/s: 0 rss: 72Mb L: 20/83 MS: 1 InsertByte- 00:08:13.993 [2024-07-15 16:18:59.497793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.993 [2024-07-15 16:18:59.497819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.497871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.993 [2024-07-15 16:18:59.497887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.497938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.993 [2024-07-15 16:18:59.497954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.498007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.993 [2024-07-15 16:18:59.498026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.993 #20 NEW cov: 12210 ft: 14880 corp: 9/433b lim: 85 exec/s: 0 rss: 72Mb L: 83/83 MS: 1 ChangeBinInt- 00:08:13.993 [2024-07-15 16:18:59.547627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.993 [2024-07-15 16:18:59.547655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.993 [2024-07-15 16:18:59.547692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.993 [2024-07-15 16:18:59.547708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.251 #21 NEW cov: 12210 ft: 14919 corp: 10/476b lim: 85 exec/s: 0 rss: 72Mb L: 43/83 MS: 1 ChangeByte- 00:08:14.251 [2024-07-15 16:18:59.598046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.251 [2024-07-15 16:18:59.598074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:14.251 [2024-07-15 16:18:59.598120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.251 [2024-07-15 16:18:59.598136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.598187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.251 [2024-07-15 16:18:59.598203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.598254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.251 [2024-07-15 16:18:59.598270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.251 #22 NEW cov: 12210 ft: 14962 corp: 11/560b lim: 85 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 CrossOver- 00:08:14.251 [2024-07-15 16:18:59.637858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.251 [2024-07-15 16:18:59.637885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.637939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.251 [2024-07-15 16:18:59.637954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.251 #23 NEW cov: 12210 ft: 15054 corp: 12/604b lim: 85 exec/s: 0 rss: 72Mb L: 44/84 MS: 1 InsertByte- 00:08:14.251 [2024-07-15 16:18:59.688327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.251 [2024-07-15 16:18:59.688354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.688404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.251 [2024-07-15 16:18:59.688419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.688470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.251 [2024-07-15 16:18:59.688487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.688542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.251 [2024-07-15 16:18:59.688558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.251 #24 NEW cov: 12210 ft: 15090 corp: 13/675b lim: 85 exec/s: 0 rss: 72Mb L: 71/84 MS: 1 EraseBytes- 00:08:14.251 [2024-07-15 16:18:59.738276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.251 [2024-07-15 16:18:59.738303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:14.251 [2024-07-15 16:18:59.738340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.251 [2024-07-15 16:18:59.738357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.738407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.251 [2024-07-15 16:18:59.738423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.251 #25 NEW cov: 12210 ft: 15167 corp: 14/733b lim: 85 exec/s: 0 rss: 72Mb L: 58/84 MS: 1 InsertByte- 00:08:14.251 [2024-07-15 16:18:59.778387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.251 [2024-07-15 16:18:59.778416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.778453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.251 [2024-07-15 16:18:59.778469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.778521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.251 [2024-07-15 16:18:59.778544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.251 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:14.251 #26 NEW cov: 12233 ft: 15234 corp: 15/791b lim: 85 exec/s: 0 rss: 72Mb L: 58/84 MS: 1 InsertByte- 00:08:14.251 [2024-07-15 16:18:59.818800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.251 [2024-07-15 16:18:59.818827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.251 [2024-07-15 16:18:59.818878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.251 [2024-07-15 16:18:59.818895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.252 [2024-07-15 16:18:59.818947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.252 [2024-07-15 16:18:59.818963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.252 [2024-07-15 16:18:59.819015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.252 [2024-07-15 16:18:59.819031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.252 [2024-07-15 16:18:59.819083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:08:14.252 [2024-07-15 16:18:59.819098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 
sqhd:0006 p:0 m:0 dnr:1 00:08:14.510 #27 NEW cov: 12233 ft: 15305 corp: 16/876b lim: 85 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 InsertByte- 00:08:14.510 [2024-07-15 16:18:59.868647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.510 [2024-07-15 16:18:59.868674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.868710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.510 [2024-07-15 16:18:59.868726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.868778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.510 [2024-07-15 16:18:59.868795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.510 #28 NEW cov: 12233 ft: 15367 corp: 17/934b lim: 85 exec/s: 28 rss: 73Mb L: 58/85 MS: 1 ChangeBinInt- 00:08:14.510 [2024-07-15 16:18:59.919096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.510 [2024-07-15 16:18:59.919125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.919173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.510 [2024-07-15 16:18:59.919189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.919245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.510 [2024-07-15 16:18:59.919261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.919312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.510 [2024-07-15 16:18:59.919329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.919381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:08:14.510 [2024-07-15 16:18:59.919397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:14.510 #29 NEW cov: 12233 ft: 15385 corp: 18/1019b lim: 85 exec/s: 29 rss: 73Mb L: 85/85 MS: 1 CopyPart- 00:08:14.510 [2024-07-15 16:18:59.969045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.510 [2024-07-15 16:18:59.969072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.969123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.510 [2024-07-15 16:18:59.969139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.969190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.510 [2024-07-15 16:18:59.969205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:18:59.969258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.510 [2024-07-15 16:18:59.969274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.510 #30 NEW cov: 12233 ft: 15400 corp: 19/1098b lim: 85 exec/s: 30 rss: 73Mb L: 79/85 MS: 1 InsertRepeatedBytes- 00:08:14.510 [2024-07-15 16:19:00.009056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.510 [2024-07-15 16:19:00.009083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:19:00.009131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.510 [2024-07-15 16:19:00.009147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:19:00.009202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.510 [2024-07-15 16:19:00.009218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.510 #31 NEW cov: 12233 ft: 15409 corp: 20/1162b lim: 85 exec/s: 31 rss: 73Mb L: 64/85 MS: 1 ChangeBinInt- 00:08:14.510 [2024-07-15 16:19:00.049358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.510 [2024-07-15 16:19:00.049386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:19:00.049432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.510 [2024-07-15 16:19:00.049448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:19:00.049502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.510 [2024-07-15 16:19:00.049516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.510 [2024-07-15 16:19:00.049574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.510 [2024-07-15 16:19:00.049590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.510 #32 NEW cov: 12233 ft: 15422 corp: 21/1241b lim: 85 exec/s: 32 rss: 73Mb L: 79/85 MS: 1 EraseBytes- 00:08:14.769 [2024-07-15 16:19:00.089434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.769 [2024-07-15 16:19:00.089462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.089509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.769 [2024-07-15 16:19:00.089525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.089582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.769 [2024-07-15 16:19:00.089600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.089654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.769 [2024-07-15 16:19:00.089671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.769 #33 NEW cov: 12233 ft: 15461 corp: 22/1324b lim: 85 exec/s: 33 rss: 73Mb L: 83/85 MS: 1 ChangeByte- 00:08:14.769 [2024-07-15 16:19:00.129684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.769 [2024-07-15 16:19:00.129711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.129765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.769 [2024-07-15 16:19:00.129782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.129833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.769 [2024-07-15 16:19:00.129849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.129901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.769 [2024-07-15 16:19:00.129920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.129973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:08:14.769 [2024-07-15 16:19:00.129989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:14.769 #34 NEW cov: 12233 ft: 15475 corp: 23/1409b lim: 85 exec/s: 34 rss: 73Mb L: 85/85 MS: 1 CopyPart- 00:08:14.769 [2024-07-15 16:19:00.169516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.769 [2024-07-15 16:19:00.169549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.169599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.769 [2024-07-15 16:19:00.169615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.169668] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.769 [2024-07-15 16:19:00.169684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.769 #35 NEW cov: 12233 ft: 15533 corp: 24/1460b lim: 85 exec/s: 35 rss: 73Mb L: 51/85 MS: 1 EraseBytes- 00:08:14.769 [2024-07-15 16:19:00.219706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.769 [2024-07-15 16:19:00.219734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.219776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.769 [2024-07-15 16:19:00.219792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.219846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.769 [2024-07-15 16:19:00.219863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.769 #36 NEW cov: 12233 ft: 15541 corp: 25/1518b lim: 85 exec/s: 36 rss: 73Mb L: 58/85 MS: 1 CopyPart- 00:08:14.769 [2024-07-15 16:19:00.269786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.769 [2024-07-15 16:19:00.269812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.269849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.769 [2024-07-15 16:19:00.269864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.269917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.769 [2024-07-15 16:19:00.269932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.769 #37 NEW cov: 12233 ft: 15547 corp: 26/1576b lim: 85 exec/s: 37 rss: 73Mb L: 58/85 MS: 1 ChangeBinInt- 00:08:14.769 [2024-07-15 16:19:00.309922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.769 [2024-07-15 16:19:00.309948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.309995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.769 [2024-07-15 16:19:00.310010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.769 [2024-07-15 16:19:00.310066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.769 [2024-07-15 16:19:00.310080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.769 #38 NEW cov: 12233 ft: 15560 corp: 
27/1634b lim: 85 exec/s: 38 rss: 73Mb L: 58/85 MS: 1 ChangeByte- 00:08:15.028 [2024-07-15 16:19:00.349853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.028 [2024-07-15 16:19:00.349880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.349933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.028 [2024-07-15 16:19:00.349949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.028 #43 NEW cov: 12233 ft: 15581 corp: 28/1674b lim: 85 exec/s: 43 rss: 73Mb L: 40/85 MS: 5 InsertRepeatedBytes-EraseBytes-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:08:15.028 [2024-07-15 16:19:00.390416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.028 [2024-07-15 16:19:00.390442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.390496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.028 [2024-07-15 16:19:00.390512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.390584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.028 [2024-07-15 16:19:00.390601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.390665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.028 [2024-07-15 16:19:00.390681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.390733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:08:15.028 [2024-07-15 16:19:00.390748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:15.028 #44 NEW cov: 12233 ft: 15597 corp: 29/1759b lim: 85 exec/s: 44 rss: 73Mb L: 85/85 MS: 1 ChangeBinInt- 00:08:15.028 [2024-07-15 16:19:00.430543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.028 [2024-07-15 16:19:00.430570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.430625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.028 [2024-07-15 16:19:00.430641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.430691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.028 [2024-07-15 16:19:00.430706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.430757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.028 [2024-07-15 16:19:00.430772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.430826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:08:15.028 [2024-07-15 16:19:00.430844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:15.028 #45 NEW cov: 12233 ft: 15612 corp: 30/1844b lim: 85 exec/s: 45 rss: 73Mb L: 85/85 MS: 1 ChangeASCIIInt- 00:08:15.028 [2024-07-15 16:19:00.470467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.028 [2024-07-15 16:19:00.470493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.470544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.028 [2024-07-15 16:19:00.470578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.470631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.028 [2024-07-15 16:19:00.470647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.470702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.028 [2024-07-15 16:19:00.470718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.028 #46 NEW cov: 12233 ft: 15655 corp: 31/1928b lim: 85 exec/s: 46 rss: 73Mb L: 84/85 MS: 1 CrossOver- 00:08:15.028 [2024-07-15 16:19:00.510473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.028 [2024-07-15 16:19:00.510499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.510552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.028 [2024-07-15 16:19:00.510569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.510622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.028 [2024-07-15 16:19:00.510637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.028 #47 NEW cov: 12233 ft: 15661 corp: 32/1992b lim: 85 exec/s: 47 rss: 73Mb L: 64/85 MS: 1 ShuffleBytes- 00:08:15.028 [2024-07-15 16:19:00.560624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.028 [2024-07-15 16:19:00.560651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.028 [2024-07-15 16:19:00.560693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.028 [2024-07-15 16:19:00.560708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.029 [2024-07-15 16:19:00.560759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.029 [2024-07-15 16:19:00.560776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.029 #48 NEW cov: 12233 ft: 15665 corp: 33/2050b lim: 85 exec/s: 48 rss: 74Mb L: 58/85 MS: 1 ChangeBit- 00:08:15.287 [2024-07-15 16:19:00.610923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.287 [2024-07-15 16:19:00.610949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.610999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.287 [2024-07-15 16:19:00.611015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.611070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.287 [2024-07-15 16:19:00.611086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.611139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.287 [2024-07-15 16:19:00.611155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.287 #49 NEW cov: 12233 ft: 15675 corp: 34/2133b lim: 85 exec/s: 49 rss: 74Mb L: 83/85 MS: 1 ChangeByte- 00:08:15.287 [2024-07-15 16:19:00.650851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.287 [2024-07-15 16:19:00.650878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.650923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.287 [2024-07-15 16:19:00.650938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.650991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.287 [2024-07-15 16:19:00.651007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.287 #50 NEW cov: 12233 ft: 15676 corp: 35/2191b lim: 85 exec/s: 50 rss: 74Mb L: 58/85 MS: 1 ChangeBit- 00:08:15.287 [2024-07-15 16:19:00.701003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.287 [2024-07-15 16:19:00.701030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.701074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.287 [2024-07-15 16:19:00.701090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.701144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.287 [2024-07-15 16:19:00.701160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.287 #51 NEW cov: 12233 ft: 15693 corp: 36/2257b lim: 85 exec/s: 51 rss: 74Mb L: 66/85 MS: 1 EraseBytes- 00:08:15.287 [2024-07-15 16:19:00.750873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.287 [2024-07-15 16:19:00.750900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.287 #52 NEW cov: 12233 ft: 15703 corp: 37/2276b lim: 85 exec/s: 52 rss: 74Mb L: 19/85 MS: 1 ChangeBit- 00:08:15.287 [2024-07-15 16:19:00.791545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.287 [2024-07-15 16:19:00.791571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.791625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.287 [2024-07-15 16:19:00.791641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.791695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.287 [2024-07-15 16:19:00.791711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.791763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.287 [2024-07-15 16:19:00.791781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.287 [2024-07-15 16:19:00.791838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:08:15.287 [2024-07-15 16:19:00.791853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:15.288 #53 NEW cov: 12233 ft: 15707 corp: 38/2361b lim: 85 exec/s: 53 rss: 74Mb L: 85/85 MS: 1 CopyPart- 00:08:15.288 [2024-07-15 16:19:00.841690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.288 [2024-07-15 16:19:00.841716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.288 [2024-07-15 16:19:00.841771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.288 [2024-07-15 16:19:00.841786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.288 [2024-07-15 16:19:00.841839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.288 [2024-07-15 16:19:00.841856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.288 [2024-07-15 16:19:00.841906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.288 [2024-07-15 16:19:00.841920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.288 [2024-07-15 16:19:00.841975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:08:15.288 [2024-07-15 16:19:00.841990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:15.547 #54 NEW cov: 12233 ft: 15728 corp: 39/2446b lim: 85 exec/s: 27 rss: 74Mb L: 85/85 MS: 1 ShuffleBytes- 00:08:15.547 #54 DONE cov: 12233 ft: 15728 corp: 39/2446b lim: 85 exec/s: 27 rss: 74Mb 00:08:15.547 Done 54 runs in 2 second(s) 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:15.547 16:19:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:15.547 [2024-07-15 16:19:01.061334] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:08:15.547 [2024-07-15 16:19:01.061419] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522812 ] 00:08:15.547 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.805 [2024-07-15 16:19:01.269259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.805 [2024-07-15 16:19:01.344213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.063 [2024-07-15 16:19:01.403734] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.063 [2024-07-15 16:19:01.419920] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:16.063 INFO: Running with entropic power schedule (0xFF, 100). 00:08:16.063 INFO: Seed: 2932953305 00:08:16.063 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6), 00:08:16.063 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688), 00:08:16.063 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:16.063 INFO: A corpus is not provided, starting from an empty corpus 00:08:16.063 #2 INITED exec/s: 0 rss: 65Mb 00:08:16.063 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:16.064 This may also happen if the target rejected all inputs we tried so far 00:08:16.064 [2024-07-15 16:19:01.469196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.064 [2024-07-15 16:19:01.469227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.064 [2024-07-15 16:19:01.469274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.064 [2024-07-15 16:19:01.469290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.064 [2024-07-15 16:19:01.469344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.064 [2024-07-15 16:19:01.469359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.064 [2024-07-15 16:19:01.469412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.064 [2024-07-15 16:19:01.469427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.322 NEW_FUNC[1/698]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:16.322 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:16.322 #12 NEW cov: 11915 ft: 11909 corp: 2/25b lim: 25 exec/s: 0 rss: 72Mb L: 24/24 MS: 5 ChangeByte-ChangeBit-InsertByte-InsertByte-InsertRepeatedBytes- 00:08:16.322 [2024-07-15 16:19:01.809976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.322 [2024-07-15 16:19:01.810025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.322 [2024-07-15 16:19:01.810088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.322 [2024-07-15 16:19:01.810109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.322 [2024-07-15 16:19:01.810174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.322 [2024-07-15 16:19:01.810195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.322 [2024-07-15 16:19:01.810256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.322 [2024-07-15 16:19:01.810276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.322 #18 NEW cov: 12052 ft: 12659 corp: 3/49b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ShuffleBytes- 00:08:16.322 [2024-07-15 16:19:01.860000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.322 [2024-07-15 16:19:01.860028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:16.322 [2024-07-15 16:19:01.860076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.322 [2024-07-15 16:19:01.860092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.322 [2024-07-15 16:19:01.860144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.322 [2024-07-15 16:19:01.860160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.322 [2024-07-15 16:19:01.860210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.322 [2024-07-15 16:19:01.860226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.322 #19 NEW cov: 12058 ft: 12840 corp: 4/73b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ShuffleBytes- 00:08:16.322 [2024-07-15 16:19:01.900119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.322 [2024-07-15 16:19:01.900146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.322 [2024-07-15 16:19:01.900198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.323 [2024-07-15 16:19:01.900215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.323 [2024-07-15 16:19:01.900269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.323 [2024-07-15 16:19:01.900285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.323 [2024-07-15 16:19:01.900338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.323 [2024-07-15 16:19:01.900355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.582 #20 NEW cov: 12143 ft: 13142 corp: 5/97b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeByte- 00:08:16.582 [2024-07-15 16:19:01.950249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.582 [2024-07-15 16:19:01.950276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:01.950328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.582 [2024-07-15 16:19:01.950344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:01.950396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.582 [2024-07-15 16:19:01.950411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:01.950469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.582 [2024-07-15 16:19:01.950485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.582 #21 NEW cov: 12143 ft: 13209 corp: 6/121b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:16.582 [2024-07-15 16:19:01.990256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.582 [2024-07-15 16:19:01.990283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:01.990319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.582 [2024-07-15 16:19:01.990335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:01.990388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.582 [2024-07-15 16:19:01.990404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.582 #22 NEW cov: 12143 ft: 13678 corp: 7/136b lim: 25 exec/s: 0 rss: 73Mb L: 15/24 MS: 1 EraseBytes- 00:08:16.582 [2024-07-15 16:19:02.030462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.582 [2024-07-15 16:19:02.030489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.030542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.582 [2024-07-15 16:19:02.030558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.030610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.582 [2024-07-15 16:19:02.030626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.030676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.582 [2024-07-15 16:19:02.030691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.582 #23 NEW cov: 12143 ft: 13766 corp: 8/160b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ShuffleBytes- 00:08:16.582 [2024-07-15 16:19:02.070572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.582 [2024-07-15 16:19:02.070598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.070648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.582 [2024-07-15 16:19:02.070663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.070714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:2 nsid:0 00:08:16.582 [2024-07-15 16:19:02.070729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.070783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.582 [2024-07-15 16:19:02.070797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.582 #24 NEW cov: 12143 ft: 13801 corp: 9/184b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:16.582 [2024-07-15 16:19:02.120654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.582 [2024-07-15 16:19:02.120684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.120728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.582 [2024-07-15 16:19:02.120744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.120794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.582 [2024-07-15 16:19:02.120809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.582 [2024-07-15 16:19:02.120863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.582 [2024-07-15 16:19:02.120877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.582 #25 NEW cov: 12143 ft: 13837 corp: 10/208b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBit- 00:08:16.841 [2024-07-15 16:19:02.170772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.841 [2024-07-15 16:19:02.170798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.170847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.841 [2024-07-15 16:19:02.170863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.170913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.841 [2024-07-15 16:19:02.170928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.170979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.841 [2024-07-15 16:19:02.170994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.841 #26 NEW cov: 12143 ft: 13867 corp: 11/232b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:16.841 [2024-07-15 16:19:02.220944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 
00:08:16.841 [2024-07-15 16:19:02.220970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.221023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.841 [2024-07-15 16:19:02.221038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.221087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.841 [2024-07-15 16:19:02.221103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.221159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.841 [2024-07-15 16:19:02.221175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.841 #27 NEW cov: 12143 ft: 13934 corp: 12/256b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ShuffleBytes- 00:08:16.841 [2024-07-15 16:19:02.260915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.841 [2024-07-15 16:19:02.260940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.260980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.841 [2024-07-15 16:19:02.260994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.261049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.841 [2024-07-15 16:19:02.261065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.841 #28 NEW cov: 12143 ft: 13953 corp: 13/271b lim: 25 exec/s: 0 rss: 73Mb L: 15/24 MS: 1 ChangeByte- 00:08:16.841 [2024-07-15 16:19:02.311188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.841 [2024-07-15 16:19:02.311216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.311267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.841 [2024-07-15 16:19:02.311283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.311335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.841 [2024-07-15 16:19:02.311352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.311404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.841 [2024-07-15 16:19:02.311421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.841 #29 NEW cov: 12143 ft: 14026 corp: 14/295b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:16.841 [2024-07-15 16:19:02.351291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.841 [2024-07-15 16:19:02.351318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.351364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.841 [2024-07-15 16:19:02.351380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.351432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.841 [2024-07-15 16:19:02.351448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.841 [2024-07-15 16:19:02.351501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.841 [2024-07-15 16:19:02.351517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.841 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:16.841 #30 NEW cov: 12166 ft: 14072 corp: 15/319b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBit- 00:08:16.841 [2024-07-15 16:19:02.391425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.841 [2024-07-15 16:19:02.391453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.842 [2024-07-15 16:19:02.391501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.842 [2024-07-15 16:19:02.391518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.842 [2024-07-15 16:19:02.391592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.842 [2024-07-15 16:19:02.391610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.842 [2024-07-15 16:19:02.391666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.842 [2024-07-15 16:19:02.391683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.842 #31 NEW cov: 12166 ft: 14077 corp: 16/343b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeByte- 00:08:17.100 [2024-07-15 16:19:02.431526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.100 [2024-07-15 16:19:02.431558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.431610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.100 
[2024-07-15 16:19:02.431626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.431677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.100 [2024-07-15 16:19:02.431691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.431744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.100 [2024-07-15 16:19:02.431759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.100 #32 NEW cov: 12166 ft: 14088 corp: 17/367b lim: 25 exec/s: 32 rss: 73Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:17.100 [2024-07-15 16:19:02.471619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.100 [2024-07-15 16:19:02.471647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.471693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.100 [2024-07-15 16:19:02.471708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.471762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.100 [2024-07-15 16:19:02.471778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.471830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.100 [2024-07-15 16:19:02.471846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.100 #33 NEW cov: 12166 ft: 14093 corp: 18/391b lim: 25 exec/s: 33 rss: 73Mb L: 24/24 MS: 1 ChangeByte- 00:08:17.100 [2024-07-15 16:19:02.521903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.100 [2024-07-15 16:19:02.521930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.521988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.100 [2024-07-15 16:19:02.522004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.522055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.100 [2024-07-15 16:19:02.522070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.522123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.100 [2024-07-15 16:19:02.522142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.100 [2024-07-15 16:19:02.522196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:17.100 [2024-07-15 16:19:02.522211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.100 #34 NEW cov: 12166 ft: 14160 corp: 19/416b lim: 25 exec/s: 34 rss: 73Mb L: 25/25 MS: 1 InsertByte- 00:08:17.100 [2024-07-15 16:19:02.571836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.101 [2024-07-15 16:19:02.571863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.101 [2024-07-15 16:19:02.571907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.101 [2024-07-15 16:19:02.571924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.101 [2024-07-15 16:19:02.571977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.101 [2024-07-15 16:19:02.571993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.101 #35 NEW cov: 12166 ft: 14198 corp: 20/432b lim: 25 exec/s: 35 rss: 73Mb L: 16/25 MS: 1 InsertByte- 00:08:17.101 [2024-07-15 16:19:02.622076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.101 [2024-07-15 16:19:02.622103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.101 [2024-07-15 16:19:02.622151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.101 [2024-07-15 16:19:02.622168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.101 [2024-07-15 16:19:02.622220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.101 [2024-07-15 16:19:02.622235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.101 [2024-07-15 16:19:02.622290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.101 [2024-07-15 16:19:02.622305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.101 #36 NEW cov: 12166 ft: 14253 corp: 21/456b lim: 25 exec/s: 36 rss: 74Mb L: 24/25 MS: 1 ChangeByte- 00:08:17.101 [2024-07-15 16:19:02.672104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.101 [2024-07-15 16:19:02.672131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.101 [2024-07-15 16:19:02.672172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.101 [2024-07-15 16:19:02.672186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:17.101 [2024-07-15 16:19:02.672241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.101 [2024-07-15 16:19:02.672258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.361 #37 NEW cov: 12166 ft: 14264 corp: 22/471b lim: 25 exec/s: 37 rss: 74Mb L: 15/25 MS: 1 ChangeByte- 00:08:17.361 [2024-07-15 16:19:02.712324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.361 [2024-07-15 16:19:02.712350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.712397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.361 [2024-07-15 16:19:02.712413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.712465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.361 [2024-07-15 16:19:02.712480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.712539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.361 [2024-07-15 16:19:02.712555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.361 #38 NEW cov: 12166 ft: 14283 corp: 23/495b lim: 25 exec/s: 38 rss: 74Mb L: 24/25 MS: 1 ShuffleBytes- 00:08:17.361 [2024-07-15 16:19:02.752303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.361 [2024-07-15 16:19:02.752331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.752378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.361 [2024-07-15 16:19:02.752393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.752447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.361 [2024-07-15 16:19:02.752462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.361 #39 NEW cov: 12166 ft: 14292 corp: 24/510b lim: 25 exec/s: 39 rss: 74Mb L: 15/25 MS: 1 EraseBytes- 00:08:17.361 [2024-07-15 16:19:02.802676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.361 [2024-07-15 16:19:02.802703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.802758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.361 [2024-07-15 16:19:02.802773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.802824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.361 [2024-07-15 16:19:02.802838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.802890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.361 [2024-07-15 16:19:02.802906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.802957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:17.361 [2024-07-15 16:19:02.802972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.361 #40 NEW cov: 12166 ft: 14322 corp: 25/535b lim: 25 exec/s: 40 rss: 74Mb L: 25/25 MS: 1 InsertByte- 00:08:17.361 [2024-07-15 16:19:02.842700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.361 [2024-07-15 16:19:02.842726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.842781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.361 [2024-07-15 16:19:02.842797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.842853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.361 [2024-07-15 16:19:02.842869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.842922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.361 [2024-07-15 16:19:02.842937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.361 #42 NEW cov: 12166 ft: 14323 corp: 26/559b lim: 25 exec/s: 42 rss: 74Mb L: 24/25 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:17.361 [2024-07-15 16:19:02.882928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.361 [2024-07-15 16:19:02.882954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.883008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.361 [2024-07-15 16:19:02.883023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.883074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.361 [2024-07-15 16:19:02.883091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.883144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.361 [2024-07-15 16:19:02.883158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.883210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:17.361 [2024-07-15 16:19:02.883227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.361 #43 NEW cov: 12166 ft: 14331 corp: 27/584b lim: 25 exec/s: 43 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:08:17.361 [2024-07-15 16:19:02.922892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.361 [2024-07-15 16:19:02.922918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.922971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.361 [2024-07-15 16:19:02.922988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.923039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.361 [2024-07-15 16:19:02.923055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.361 [2024-07-15 16:19:02.923109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.361 [2024-07-15 16:19:02.923124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.620 #44 NEW cov: 12166 ft: 14339 corp: 28/608b lim: 25 exec/s: 44 rss: 74Mb L: 24/25 MS: 1 ChangeBinInt- 00:08:17.620 [2024-07-15 16:19:02.973027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.620 [2024-07-15 16:19:02.973054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:02.973107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.620 [2024-07-15 16:19:02.973122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:02.973178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.620 [2024-07-15 16:19:02.973193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:02.973247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.620 [2024-07-15 16:19:02.973263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.620 #45 NEW cov: 12166 ft: 14342 corp: 29/632b lim: 25 exec/s: 45 rss: 74Mb L: 24/25 MS: 1 ChangeBit- 00:08:17.620 [2024-07-15 16:19:03.023180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT 
(0e) sqid:1 cid:0 nsid:0 00:08:17.620 [2024-07-15 16:19:03.023206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.023260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.620 [2024-07-15 16:19:03.023274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.023328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.620 [2024-07-15 16:19:03.023343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.023397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.620 [2024-07-15 16:19:03.023413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.620 #46 NEW cov: 12166 ft: 14362 corp: 30/656b lim: 25 exec/s: 46 rss: 74Mb L: 24/25 MS: 1 ChangeBit- 00:08:17.620 [2024-07-15 16:19:03.073304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.620 [2024-07-15 16:19:03.073330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.073379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.620 [2024-07-15 16:19:03.073395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.073443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.620 [2024-07-15 16:19:03.073459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.073510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.620 [2024-07-15 16:19:03.073524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.620 #47 NEW cov: 12166 ft: 14366 corp: 31/680b lim: 25 exec/s: 47 rss: 74Mb L: 24/25 MS: 1 ChangeBinInt- 00:08:17.620 [2024-07-15 16:19:03.123596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.620 [2024-07-15 16:19:03.123623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.123677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.620 [2024-07-15 16:19:03.123693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.123747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.620 [2024-07-15 16:19:03.123764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.123820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.620 [2024-07-15 16:19:03.123837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.123890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:17.620 [2024-07-15 16:19:03.123906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.620 #48 NEW cov: 12166 ft: 14390 corp: 32/705b lim: 25 exec/s: 48 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:08:17.620 [2024-07-15 16:19:03.173495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.620 [2024-07-15 16:19:03.173522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.173581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.620 [2024-07-15 16:19:03.173597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.620 [2024-07-15 16:19:03.173649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.620 [2024-07-15 16:19:03.173666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.620 #49 NEW cov: 12166 ft: 14394 corp: 33/721b lim: 25 exec/s: 49 rss: 74Mb L: 16/25 MS: 1 InsertByte- 00:08:17.878 [2024-07-15 16:19:03.213586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.878 [2024-07-15 16:19:03.213612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.213663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.878 [2024-07-15 16:19:03.213678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.213730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.878 [2024-07-15 16:19:03.213745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.878 #50 NEW cov: 12166 ft: 14464 corp: 34/737b lim: 25 exec/s: 50 rss: 74Mb L: 16/25 MS: 1 ShuffleBytes- 00:08:17.878 [2024-07-15 16:19:03.263827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.878 [2024-07-15 16:19:03.263853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.263907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.878 [2024-07-15 16:19:03.263921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.263972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.878 [2024-07-15 16:19:03.263989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.264039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.878 [2024-07-15 16:19:03.264055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.878 #51 NEW cov: 12166 ft: 14514 corp: 35/761b lim: 25 exec/s: 51 rss: 74Mb L: 24/25 MS: 1 ChangeByte- 00:08:17.878 [2024-07-15 16:19:03.314107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.878 [2024-07-15 16:19:03.314136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.314190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.878 [2024-07-15 16:19:03.314207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.314259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.878 [2024-07-15 16:19:03.314275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.314327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.878 [2024-07-15 16:19:03.314343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.314394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:17.878 [2024-07-15 16:19:03.314409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.878 #52 NEW cov: 12166 ft: 14522 corp: 36/786b lim: 25 exec/s: 52 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:08:17.878 [2024-07-15 16:19:03.354186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.878 [2024-07-15 16:19:03.354212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.354267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.878 [2024-07-15 16:19:03.354281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.354334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.878 [2024-07-15 16:19:03.354350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.878 [2024-07-15 16:19:03.354401] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:17.878 [2024-07-15 16:19:03.354417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:17.878 [2024-07-15 16:19:03.354468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:08:17.878 [2024-07-15 16:19:03.354485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:08:17.878 #53 NEW cov: 12166 ft: 14554 corp: 37/811b lim: 25 exec/s: 53 rss: 74Mb L: 25/25 MS: 1 InsertByte-
00:08:17.878 [2024-07-15 16:19:03.394212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:17.878 [2024-07-15 16:19:03.394238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:17.878 [2024-07-15 16:19:03.394293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:17.878 [2024-07-15 16:19:03.394308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:17.878 [2024-07-15 16:19:03.394359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:17.878 [2024-07-15 16:19:03.394374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:17.878 [2024-07-15 16:19:03.394425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:17.878 [2024-07-15 16:19:03.394444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:17.878 [2024-07-15 16:19:03.434326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:17.878 [2024-07-15 16:19:03.434352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:17.878 [2024-07-15 16:19:03.434405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:17.878 [2024-07-15 16:19:03.434420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:17.878 [2024-07-15 16:19:03.434471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:17.878 [2024-07-15 16:19:03.434487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:17.878 [2024-07-15 16:19:03.434540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:17.878 [2024-07-15 16:19:03.434557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:17.878 #55 NEW cov: 12166 ft: 14556 corp: 38/835b lim: 25 exec/s: 27 rss: 74Mb L: 24/25 MS: 2 ChangeBinInt-ShuffleBytes-
00:08:17.878 #55 DONE cov: 12166 ft: 14556 corp: 38/835b lim: 25 exec/s: 27 rss: 74Mb
00:08:17.878 Done 55 runs in 2 second(s)
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424'
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:18.137 16:19:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24
[2024-07-15 16:19:03.638073] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization...
[2024-07-15 16:19:03.638153] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523138 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 16:19:03.845319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 16:19:03.918468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-15 16:19:03.977951] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-07-15 16:19:03.994125] tcp.c: 993:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 ***
INFO: Running with entropic power schedule (0xFF, 100).
00:08:18.655 INFO: Seed: 1213993805
00:08:18.655 INFO: Loaded 1 modules (357850 inline 8-bit counters): 357850 [0x29ab30c, 0x2a028e6),
00:08:18.655 INFO: Loaded 1 PC tables (357850 PCs): 357850 [0x2a028e8,0x2f78688),
00:08:18.655 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:18.655 INFO: A corpus is not provided, starting from an empty corpus
00:08:18.655 #2 INITED exec/s: 0 rss: 65Mb
00:08:18.655 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:18.655 This may also happen if the target rejected all inputs we tried so far
00:08:18.655 [2024-07-15 16:19:04.071778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:18.655 [2024-07-15 16:19:04.071817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:18.655 [2024-07-15 16:19:04.071934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:18.655 [2024-07-15 16:19:04.071951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:18.655 [2024-07-15 16:19:04.072038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:18.655 [2024-07-15 16:19:04.072056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:18.655 [2024-07-15 16:19:04.072146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:18.655 [2024-07-15 16:19:04.072167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:18.913 NEW_FUNC[1/699]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685
00:08:18.914 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:18.914 #5 NEW cov: 11994 ft: 11994 corp: 2/100b lim: 100 exec/s: 0 rss: 72Mb L: 99/99 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes-
00:08:18.914 [2024-07-15 16:19:04.412078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:18.914 [2024-07-15 16:19:04.412130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:18.914 [2024-07-15 16:19:04.412232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:18.914 [2024-07-15 16:19:04.412258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:18.914 [2024-07-15 16:19:04.412355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:18.914 [2024-07-15 16:19:04.412383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004
p:0 m:0 dnr:1 00:08:18.914 #6 NEW cov: 12124 ft: 13042 corp: 3/161b lim: 100 exec/s: 0 rss: 72Mb L: 61/99 MS: 1 EraseBytes- 00:08:18.914 [2024-07-15 16:19:04.482847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.914 [2024-07-15 16:19:04.482876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.914 [2024-07-15 16:19:04.482951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.914 [2024-07-15 16:19:04.482970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.914 [2024-07-15 16:19:04.483050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.914 [2024-07-15 16:19:04.483066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.914 [2024-07-15 16:19:04.483156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.914 [2024-07-15 16:19:04.483172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.172 #7 NEW cov: 12130 ft: 13218 corp: 4/260b lim: 100 exec/s: 0 rss: 72Mb L: 99/99 MS: 1 CopyPart- 00:08:19.172 [2024-07-15 16:19:04.532863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.172 [2024-07-15 16:19:04.532892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.172 [2024-07-15 16:19:04.532973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.172 [2024-07-15 16:19:04.532990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.172 [2024-07-15 16:19:04.533060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.172 [2024-07-15 16:19:04.533081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.172 [2024-07-15 16:19:04.533165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.533183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.173 #8 NEW cov: 12215 ft: 13450 corp: 5/359b lim: 100 exec/s: 0 rss: 72Mb L: 99/99 MS: 1 ChangeBit- 00:08:19.173 [2024-07-15 16:19:04.602604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.602634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 
16:19:04.602684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.602703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.602774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.602794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.173 #9 NEW cov: 12215 ft: 13629 corp: 6/420b lim: 100 exec/s: 0 rss: 72Mb L: 61/99 MS: 1 ChangeBit- 00:08:19.173 [2024-07-15 16:19:04.663536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.663565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.663647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.663666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.663727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.663745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.663839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.663856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.663947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.663964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:19.173 #10 NEW cov: 12215 ft: 13726 corp: 7/520b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 CrossOver- 00:08:19.173 [2024-07-15 16:19:04.713715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.713746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.713827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.713846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.713913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:19.173 [2024-07-15 16:19:04.713931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.714008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.714027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.173 [2024-07-15 16:19:04.714118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.173 [2024-07-15 16:19:04.714136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:19.173 #11 NEW cov: 12215 ft: 13796 corp: 8/620b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 ChangeBit- 00:08:19.432 [2024-07-15 16:19:04.773266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.773294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.773352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.773369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.773424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.773444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.432 #12 NEW cov: 12215 ft: 13818 corp: 9/681b lim: 100 exec/s: 0 rss: 72Mb L: 61/100 MS: 1 CopyPart- 00:08:19.432 [2024-07-15 16:19:04.823172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.823198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.823273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.823292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.432 #13 NEW cov: 12215 ft: 14175 corp: 10/732b lim: 100 exec/s: 0 rss: 72Mb L: 51/100 MS: 1 EraseBytes- 00:08:19.432 [2024-07-15 16:19:04.873314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.873343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.873395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 
16:19:04.873415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.432 #14 NEW cov: 12215 ft: 14240 corp: 11/773b lim: 100 exec/s: 0 rss: 72Mb L: 41/100 MS: 1 EraseBytes- 00:08:19.432 [2024-07-15 16:19:04.934193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.934220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.934295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.934313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.934394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.934414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.934520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.934545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.432 NEW_FUNC[1/1]: 0x1a7eaf0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:19.432 #15 NEW cov: 12238 ft: 14281 corp: 12/872b lim: 100 exec/s: 0 rss: 73Mb L: 99/100 MS: 1 ShuffleBytes- 00:08:19.432 [2024-07-15 16:19:04.994014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.994046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.994125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.994148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.432 [2024-07-15 16:19:04.994210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16044073672507392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.432 [2024-07-15 16:19:04.994232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.691 #16 NEW cov: 12238 ft: 14302 corp: 13/933b lim: 100 exec/s: 0 rss: 73Mb L: 61/100 MS: 1 ChangeByte- 00:08:19.691 [2024-07-15 16:19:05.044138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18445899648779485183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.044169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.691 [2024-07-15 16:19:05.044232] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.044253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.691 [2024-07-15 16:19:05.044298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.044319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.691 #17 NEW cov: 12238 ft: 14370 corp: 14/994b lim: 100 exec/s: 17 rss: 73Mb L: 61/100 MS: 1 ChangeBinInt- 00:08:19.691 [2024-07-15 16:19:05.114905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.114939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.691 [2024-07-15 16:19:05.114994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.115011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.691 [2024-07-15 16:19:05.115084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.115102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.691 [2024-07-15 16:19:05.115183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:720575940379279360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.115204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.691 #18 NEW cov: 12238 ft: 14419 corp: 15/1093b lim: 100 exec/s: 18 rss: 73Mb L: 99/100 MS: 1 CrossOver- 00:08:19.691 [2024-07-15 16:19:05.164735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.164767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.691 [2024-07-15 16:19:05.164822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.164843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.691 [2024-07-15 16:19:05.164910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.691 [2024-07-15 16:19:05.164930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.692 #19 NEW cov: 12238 ft: 14536 corp: 16/1155b lim: 100 exec/s: 19 rss: 73Mb L: 62/100 MS: 1 CrossOver- 00:08:19.692 [2024-07-15 16:19:05.214986] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.692 [2024-07-15 16:19:05.215016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.692 [2024-07-15 16:19:05.215089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.692 [2024-07-15 16:19:05.215107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.692 [2024-07-15 16:19:05.215180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.692 [2024-07-15 16:19:05.215197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.692 #20 NEW cov: 12238 ft: 14569 corp: 17/1234b lim: 100 exec/s: 20 rss: 73Mb L: 79/100 MS: 1 CrossOver- 00:08:19.951 [2024-07-15 16:19:05.285190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.951 [2024-07-15 16:19:05.285221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.951 [2024-07-15 16:19:05.285276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.951 [2024-07-15 16:19:05.285294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.951 [2024-07-15 16:19:05.285352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:62672162783232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.951 [2024-07-15 16:19:05.285371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.952 #21 NEW cov: 12238 ft: 14609 corp: 18/1296b lim: 100 exec/s: 21 rss: 73Mb L: 62/100 MS: 1 InsertByte- 00:08:19.952 [2024-07-15 16:19:05.355468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.355497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.952 [2024-07-15 16:19:05.355557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.355575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.952 [2024-07-15 16:19:05.355648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.355668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.952 #22 NEW cov: 12238 ft: 14627 corp: 19/1357b lim: 100 exec/s: 22 rss: 73Mb L: 61/100 MS: 1 ChangeBinInt- 00:08:19.952 [2024-07-15 16:19:05.415295] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.415323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.952 [2024-07-15 16:19:05.415389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.415409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.952 #23 NEW cov: 12238 ft: 14640 corp: 20/1408b lim: 100 exec/s: 23 rss: 73Mb L: 51/100 MS: 1 CrossOver- 00:08:19.952 [2024-07-15 16:19:05.465918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.465946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.952 [2024-07-15 16:19:05.466012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.466040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.952 [2024-07-15 16:19:05.466096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.466113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.952 [2024-07-15 16:19:05.466205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.466220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.952 #24 NEW cov: 12238 ft: 14651 corp: 21/1507b lim: 100 exec/s: 24 rss: 73Mb L: 99/100 MS: 1 ChangeByte- 00:08:19.952 [2024-07-15 16:19:05.525919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.525945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.952 [2024-07-15 16:19:05.526018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.526035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.952 [2024-07-15 16:19:05.526106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.952 [2024-07-15 16:19:05.526123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.212 #25 NEW cov: 12238 ft: 14666 corp: 22/1568b lim: 100 exec/s: 25 rss: 73Mb L: 61/100 MS: 1 CMP- DE: "\377\001\000\000"- 00:08:20.212 [2024-07-15 16:19:05.576138] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.576168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.576248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.576270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.576315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.576335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.212 #26 NEW cov: 12238 ft: 14681 corp: 23/1630b lim: 100 exec/s: 26 rss: 73Mb L: 62/100 MS: 1 InsertByte- 00:08:20.212 [2024-07-15 16:19:05.625924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.625952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.626022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.626041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.212 #27 NEW cov: 12238 ft: 14696 corp: 24/1672b lim: 100 exec/s: 27 rss: 73Mb L: 42/100 MS: 1 EraseBytes- 00:08:20.212 [2024-07-15 16:19:05.686718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.686744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.686817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.686833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.686897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.686915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.687003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.687022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.212 #28 NEW cov: 12238 ft: 14719 corp: 25/1771b lim: 100 exec/s: 28 rss: 74Mb L: 99/100 MS: 1 ShuffleBytes- 00:08:20.212 [2024-07-15 16:19:05.747216] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.747249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.747325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.747346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.747412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.747430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.747525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.747551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.212 [2024-07-15 16:19:05.747648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.212 [2024-07-15 16:19:05.747670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:20.212 #29 NEW cov: 12238 ft: 14764 corp: 26/1871b lim: 100 exec/s: 29 rss: 74Mb L: 100/100 MS: 1 CrossOver- 00:08:20.472 [2024-07-15 16:19:05.796821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.472 [2024-07-15 16:19:05.796851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.472 [2024-07-15 16:19:05.796911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.472 [2024-07-15 16:19:05.796930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.472 [2024-07-15 16:19:05.796993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16044073672507392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.472 [2024-07-15 16:19:05.797013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.472 #35 NEW cov: 12238 ft: 14779 corp: 27/1932b lim: 100 exec/s: 35 rss: 74Mb L: 61/100 MS: 1 ShuffleBytes- 00:08:20.473 [2024-07-15 16:19:05.847287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1152921504606846976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.847317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.473 [2024-07-15 16:19:05.847389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:20.473 [2024-07-15 16:19:05.847406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.473 [2024-07-15 16:19:05.847476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.847495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.473 [2024-07-15 16:19:05.847582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.847602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.473 #36 NEW cov: 12238 ft: 14844 corp: 28/2031b lim: 100 exec/s: 36 rss: 74Mb L: 99/100 MS: 1 ChangeBit- 00:08:20.473 [2024-07-15 16:19:05.897072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.897100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.473 [2024-07-15 16:19:05.897168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.897184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.473 [2024-07-15 16:19:05.897266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.897288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.473 #37 NEW cov: 12238 ft: 14845 corp: 29/2092b lim: 100 exec/s: 37 rss: 74Mb L: 61/100 MS: 1 ShuffleBytes- 00:08:20.473 [2024-07-15 16:19:05.947327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.947355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.473 [2024-07-15 16:19:05.947409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.947427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.473 [2024-07-15 16:19:05.947502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:62672162783232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.473 [2024-07-15 16:19:05.947519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.473 #38 NEW cov: 12238 ft: 14881 corp: 30/2154b lim: 100 exec/s: 38 rss: 74Mb L: 62/100 MS: 1 ChangeByte- 00:08:20.473 [2024-07-15 16:19:06.007489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:20.473 [2024-07-15 16:19:06.007519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:20.473 [2024-07-15 16:19:06.007571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:20.473 [2024-07-15 16:19:06.007591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:20.473 [2024-07-15 16:19:06.007685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:20.473 [2024-07-15 16:19:06.007703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:20.473 #39 NEW cov: 12238 ft: 14892 corp: 31/2215b lim: 100 exec/s: 19 rss: 74Mb L: 61/100 MS: 1 CrossOver-
00:08:20.473 #39 DONE cov: 12238 ft: 14892 corp: 31/2215b lim: 100 exec/s: 19 rss: 74Mb
00:08:20.473 ###### Recommended dictionary. ######
00:08:20.473 "\377\001\000\000" # Uses: 1
00:08:20.473 ###### End of recommended dictionary. ######
00:08:20.473 Done 39 runs in 2 second(s)
00:08:20.732 16:19:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz
00:08:20.732 16:19:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:20.732 16:19:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:20.732 16:19:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT
00:08:20.732
00:08:20.732 real 1m4.872s
00:08:20.732 user 1m40.656s
00:08:20.732 sys 0m7.346s
00:08:20.732 16:19:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:20.732 16:19:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:20.733 ************************************
00:08:20.733 END TEST nvmf_llvm_fuzz
00:08:20.733 ************************************
00:08:20.733 16:19:06 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0
00:08:20.733 16:19:06 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:08:20.733 16:19:06 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:08:20.733 16:19:06 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:08:20.733 16:19:06 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:20.733 16:19:06 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:20.733 16:19:06 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:20.733 ************************************
00:08:20.733 START TEST vfio_llvm_fuzz
00:08:20.733 ************************************
00:08:20.733 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:08:20.995 * Looking for test storage...
00:08:20.995 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']'
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]]
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX=
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh
00:08:20.995 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]]
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:08:20.996 #define SPDK_CONFIG_H
00:08:20.996 #define SPDK_CONFIG_APPS 1
00:08:20.996 #define SPDK_CONFIG_ARCH native
00:08:20.996 #undef SPDK_CONFIG_ASAN
00:08:20.996 #undef SPDK_CONFIG_AVAHI
00:08:20.996 #undef SPDK_CONFIG_CET
00:08:20.996 #define SPDK_CONFIG_COVERAGE 1
00:08:20.996 #define SPDK_CONFIG_CROSS_PREFIX
00:08:20.996 #undef SPDK_CONFIG_CRYPTO
00:08:20.996 #undef SPDK_CONFIG_CRYPTO_MLX5
00:08:20.996 #undef SPDK_CONFIG_CUSTOMOCF
00:08:20.996 #undef SPDK_CONFIG_DAOS
00:08:20.996 #define SPDK_CONFIG_DAOS_DIR
00:08:20.996 #define SPDK_CONFIG_DEBUG 1
00:08:20.996 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:08:20.996 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:08:20.996 #define SPDK_CONFIG_DPDK_INC_DIR
00:08:20.996 #define SPDK_CONFIG_DPDK_LIB_DIR
00:08:20.996 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:08:20.996 #undef SPDK_CONFIG_DPDK_UADK
00:08:20.996 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:08:20.996 #define SPDK_CONFIG_EXAMPLES 1
00:08:20.996 #undef SPDK_CONFIG_FC
00:08:20.996 #define SPDK_CONFIG_FC_PATH
00:08:20.996 #define SPDK_CONFIG_FIO_PLUGIN 1
00:08:20.996 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:08:20.996 #undef SPDK_CONFIG_FUSE
00:08:20.996 #define SPDK_CONFIG_FUZZER 1
00:08:20.996 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a
00:08:20.996 #undef SPDK_CONFIG_GOLANG
00:08:20.996 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:08:20.996 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:08:20.996 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:08:20.996 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:08:20.996 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:08:20.996 #undef SPDK_CONFIG_HAVE_LIBBSD
00:08:20.996 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:08:20.996 #define SPDK_CONFIG_IDXD 1
00:08:20.996 #define SPDK_CONFIG_IDXD_KERNEL 1
00:08:20.996 #undef SPDK_CONFIG_IPSEC_MB
00:08:20.996 #define SPDK_CONFIG_IPSEC_MB_DIR
00:08:20.996 #define SPDK_CONFIG_ISAL 1
00:08:20.996 #define
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:20.996 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:20.996 #define SPDK_CONFIG_LIBDIR 00:08:20.996 #undef SPDK_CONFIG_LTO 00:08:20.996 #define SPDK_CONFIG_MAX_LCORES 128 00:08:20.996 #define SPDK_CONFIG_NVME_CUSE 1 00:08:20.996 #undef SPDK_CONFIG_OCF 00:08:20.996 #define SPDK_CONFIG_OCF_PATH 00:08:20.996 #define SPDK_CONFIG_OPENSSL_PATH 00:08:20.996 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:20.996 #define SPDK_CONFIG_PGO_DIR 00:08:20.996 #undef SPDK_CONFIG_PGO_USE 00:08:20.996 #define SPDK_CONFIG_PREFIX /usr/local 00:08:20.996 #undef SPDK_CONFIG_RAID5F 00:08:20.996 #undef SPDK_CONFIG_RBD 00:08:20.996 #define SPDK_CONFIG_RDMA 1 00:08:20.996 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:20.996 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:20.996 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:20.996 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:20.996 #undef SPDK_CONFIG_SHARED 00:08:20.996 #undef SPDK_CONFIG_SMA 00:08:20.996 #define SPDK_CONFIG_TESTS 1 00:08:20.996 #undef SPDK_CONFIG_TSAN 00:08:20.996 #define SPDK_CONFIG_UBLK 1 00:08:20.996 #define SPDK_CONFIG_UBSAN 1 00:08:20.996 #undef SPDK_CONFIG_UNIT_TESTS 00:08:20.996 #undef SPDK_CONFIG_URING 00:08:20.996 #define SPDK_CONFIG_URING_PATH 00:08:20.996 #undef SPDK_CONFIG_URING_ZNS 00:08:20.996 #undef SPDK_CONFIG_USDT 00:08:20.996 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:20.996 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:20.996 #define SPDK_CONFIG_VFIO_USER 1 00:08:20.996 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:20.996 #define SPDK_CONFIG_VHOST 1 00:08:20.996 #define SPDK_CONFIG_VIRTIO 1 00:08:20.996 #undef SPDK_CONFIG_VTUNE 00:08:20.996 #define SPDK_CONFIG_VTUNE_DIR 00:08:20.996 #define SPDK_CONFIG_WERROR 1 00:08:20.996 #define SPDK_CONFIG_WPDK_DIR 00:08:20.996 #undef SPDK_CONFIG_XNVME 00:08:20.996 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
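Two things are worth decoding from the trace just above. First, applications.sh@23 checks for a debug build by globbing the entire generated config.h against the pattern *#define SPDK_CONFIG_DEBUG* (the backslash-heavy pattern in the trace is that literal string, escaped character by character by set -x). A minimal restatement of the same test:

    # Debug-build probe, restated from applications.sh@22-@23:
    config_h=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"            # true here: the dumped config.h defines it
    fi

Second, the visibly duplicated /opt/... entries in the PATH values at paths/export.sh@2-@3 (and @4-@6 below) appear to be expected rather than corruption: the script prepends its toolchain directories on every source, and it has evidently been sourced several times during this run.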
00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:20.996 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:20.997 
16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:20.997 16:19:06 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:20.997 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
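The long run of paired ': 0' / 'export SPDK_TEST_...' records above and below is the post-expansion trace of a default-setting idiom: each flag receives a default only if the job configuration has not already set it, then gets exported. The trace shows only the expanded form (': 1' where the job set the flag, as for SPDK_TEST_FUZZER at autotest_common.sh@98), so the source spelling below is a presumed reconstruction:

    # Presumed source behind each ': 0' / 'export VAR' pair in this trace:
    : "${SPDK_TEST_FUZZER:=0}"    # no-op when already set; traces as ': 1' here
    export SPDK_TEST_FUZZER
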
00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:08:20.998 16:19:06 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:20.998 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:20.999 16:19:06 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 1523529 ]] 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 1523529 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.JbHqdI 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.JbHqdI/tests/vfio /tmp/spdk.JbHqdI 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=86971011072 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508576768 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7537565696 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895826944 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5890048 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253729280 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254290432 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=561152 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.999 16:19:06 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:20.999 * Looking for test storage... 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:20.999 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=86971011072 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9752158208 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.000 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
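Before the per-fuzzer locals finish (suppress_file and the remaining setup resume below at vfio/run.sh@30), note what set_test_storage did a little earlier: it parsed df -T into per-mount arrays and accepted the vfio test directory because the backing overlay root both had the roughly 2 GiB requested and would stay under a 95%-full ceiling. A minimal sketch of that arithmetic with the numbers from this run; the array plumbing of @360-@363 is elided and only the checks at @374-@382 are reproduced:

    # Space check distilled from set_test_storage, values taken from this log:
    requested_size=2214592512     # 2 GiB requested plus 64 MiB of slack
    target_space=86971011072      # avails[/], the overlay root backing the testdir
    used=7537565696               # uses[/]
    size=94508576768              # sizes[/]
    (( target_space >= requested_size )) || echo "try next storage candidate"
    new_size=$(( used + requested_size ))        # 9752158208, matching the trace
    (( new_size * 100 / size > 95 )) && echo "would push / past 95% full"
    # Neither branch fires here, so the function returns 0 and the test
    # storage lands at .../spdk/test/fuzz/llvm/vfio, as printed above.
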
00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:21.000 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:21.000 16:19:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:21.260 [2024-07-15 16:19:06.570578] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:08:21.260 [2024-07-15 16:19:06.570681] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523672 ] 00:08:21.260 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.260 [2024-07-15 16:19:06.653456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.260 [2024-07-15 16:19:06.739065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.518 INFO: Running with entropic power schedule (0xFF, 100). 00:08:21.518 INFO: Seed: 4143004775 00:08:21.518 INFO: Loaded 1 modules (355086 inline 8-bit counters): 355086 [0x296db0c, 0x29c461a), 00:08:21.518 INFO: Loaded 1 PC tables (355086 PCs): 355086 [0x29c4620,0x2f2f700), 00:08:21.518 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:21.518 INFO: A corpus is not provided, starting from an empty corpus 00:08:21.518 #2 INITED exec/s: 0 rss: 66Mb 00:08:21.518 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
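For readability, here is the vfio/run.sh@47 command from the trace above, restated one flag per line. The glosses map each flag back to the locals set at @22-@30; that mapping is inferred from this trace rather than taken from documented CLI help, and the spdk shorthand variable is mine:

    # Fuzzer-0 invocation from the trace, with inferred flag glosses.
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    args=(
        -m 0x1                                    # core mask (core=0x1)
        -s 0                                      # memory size (mem_size=0)
        -P "$spdk/../output/llvm/"                # output prefix
        -F /tmp/vfio-user-0/domain/1              # vfiouser_dir
        -c /tmp/vfio-user-0/fuzz_vfio_json.conf   # vfiouser_cfg, rewritten by the sed above
        -t 1                                      # seconds to run (timen=1)
        -D "$spdk/../corpus/llvm_vfio_0"          # corpus_dir
        -Y /tmp/vfio-user-0/domain/2              # vfiouser_io_dir
        -r /tmp/vfio-user-0/spdk0.sock            # RPC socket
        -Z 0                                      # fuzzer_type index
    )
    "$spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" "${args[@]}"

The startup banner that follows is plain libFuzzer output; with an empty corpus, zero coverage at '#2 INITED' and the instrumentation warning are expected, as the next log line itself notes.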
00:08:21.518 This may also happen if the target rejected all inputs we tried so far 00:08:21.518 [2024-07-15 16:19:06.993370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:22.037 NEW_FUNC[1/658]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:22.037 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:22.037 #21 NEW cov: 10956 ft: 10667 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 4 ChangeByte-InsertRepeatedBytes-ChangeByte-CopyPart- 00:08:22.037 NEW_FUNC[1/1]: 0x1404df0 in cq_tailp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:586 00:08:22.037 #22 NEW cov: 10973 ft: 14350 corp: 3/13b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:22.296 NEW_FUNC[1/1]: 0x1a4b020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:22.296 #23 NEW cov: 10990 ft: 15076 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:08:22.556 #24 NEW cov: 10990 ft: 16106 corp: 5/25b lim: 6 exec/s: 24 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:08:22.816 #25 NEW cov: 10990 ft: 16676 corp: 6/31b lim: 6 exec/s: 25 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:08:22.816 #26 NEW cov: 10990 ft: 17048 corp: 7/37b lim: 6 exec/s: 26 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:08:23.075 #32 NEW cov: 10990 ft: 17194 corp: 8/43b lim: 6 exec/s: 32 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:08:23.334 #33 NEW cov: 10990 ft: 17626 corp: 9/49b lim: 6 exec/s: 33 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:08:23.334 #39 NEW cov: 10997 ft: 17692 corp: 10/55b lim: 6 exec/s: 39 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:08:23.593 #49 NEW cov: 10997 ft: 17716 corp: 11/61b lim: 6 exec/s: 24 rss: 74Mb L: 6/6 MS: 5 InsertRepeatedBytes-CrossOver-ChangeBit-InsertRepeatedBytes-InsertByte- 00:08:23.593 #49 DONE cov: 10997 ft: 17716 corp: 11/61b lim: 6 exec/s: 24 rss: 74Mb 00:08:23.593 Done 49 runs in 2 second(s) 00:08:23.593 [2024-07-15 16:19:09.099725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # 
local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:23.852 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:23.853 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:23.853 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:23.853 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:23.853 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:23.853 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:23.853 16:19:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:23.853 [2024-07-15 16:19:09.397901] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:08:23.853 [2024-07-15 16:19:09.397978] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524091 ] 00:08:24.112 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.112 [2024-07-15 16:19:09.475779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.112 [2024-07-15 16:19:09.559147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.372 INFO: Running with entropic power schedule (0xFF, 100). 00:08:24.372 INFO: Seed: 2665024732 00:08:24.372 INFO: Loaded 1 modules (355086 inline 8-bit counters): 355086 [0x296db0c, 0x29c461a), 00:08:24.372 INFO: Loaded 1 PC tables (355086 PCs): 355086 [0x29c4620,0x2f2f700), 00:08:24.372 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:24.372 INFO: A corpus is not provided, starting from an empty corpus 00:08:24.372 #2 INITED exec/s: 0 rss: 66Mb 00:08:24.372 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:24.372 This may also happen if the target rejected all inputs we tried so far 00:08:24.372 [2024-07-15 16:19:09.811049] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:24.372 [2024-07-15 16:19:09.864555] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.372 [2024-07-15 16:19:09.864581] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.372 [2024-07-15 16:19:09.864616] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.889 NEW_FUNC[1/661]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:24.889 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:24.889 #5 NEW cov: 10954 ft: 10871 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 3 InsertByte-ShuffleBytes-CMP- DE: "\037\000"- 00:08:24.889 [2024-07-15 16:19:10.341489] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.889 [2024-07-15 16:19:10.341543] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.889 [2024-07-15 16:19:10.341632] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.889 #11 NEW cov: 10969 ft: 14999 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 PersAutoDict- DE: "\037\000"- 00:08:25.148 [2024-07-15 16:19:10.521488] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.148 [2024-07-15 16:19:10.521515] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.148 [2024-07-15 16:19:10.521540] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.148 NEW_FUNC[1/1]: 0x1a4b020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:25.148 #12 NEW cov: 10986 ft: 15427 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:08:25.148 [2024-07-15 16:19:10.693172] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.148 [2024-07-15 16:19:10.693195] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.148 [2024-07-15 16:19:10.693228] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.407 #22 NEW cov: 10986 ft: 15837 corp: 5/17b lim: 4 exec/s: 22 rss: 74Mb L: 4/4 MS: 5 CopyPart-ShuffleBytes-InsertByte-InsertByte-InsertByte- 00:08:25.407 [2024-07-15 16:19:10.889029] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.407 [2024-07-15 16:19:10.889051] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.407 [2024-07-15 16:19:10.889069] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.666 #33 NEW cov: 10986 ft: 15898 corp: 6/21b lim: 4 exec/s: 33 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:25.666 [2024-07-15 16:19:11.066412] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.666 [2024-07-15 16:19:11.066436] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.666 [2024-07-15 16:19:11.066454] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 
return failure 00:08:25.666 #34 NEW cov: 10986 ft: 16401 corp: 7/25b lim: 4 exec/s: 34 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:25.666 [2024-07-15 16:19:11.232853] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.666 [2024-07-15 16:19:11.232875] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.666 [2024-07-15 16:19:11.232892] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.924 #35 NEW cov: 10986 ft: 17188 corp: 8/29b lim: 4 exec/s: 35 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:08:25.924 [2024-07-15 16:19:11.403017] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.924 [2024-07-15 16:19:11.403040] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.924 [2024-07-15 16:19:11.403057] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.924 #36 NEW cov: 10986 ft: 17299 corp: 9/33b lim: 4 exec/s: 36 rss: 74Mb L: 4/4 MS: 1 CMP- DE: "\000\000"- 00:08:26.183 [2024-07-15 16:19:11.570768] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.183 [2024-07-15 16:19:11.570792] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.183 [2024-07-15 16:19:11.570809] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.183 #37 NEW cov: 10993 ft: 17623 corp: 10/37b lim: 4 exec/s: 37 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:26.183 [2024-07-15 16:19:11.744281] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.183 [2024-07-15 16:19:11.744306] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.183 [2024-07-15 16:19:11.744324] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.442 #41 NEW cov: 10993 ft: 17947 corp: 11/41b lim: 4 exec/s: 20 rss: 74Mb L: 4/4 MS: 4 ShuffleBytes-ShuffleBytes-ChangeByte-CrossOver- 00:08:26.442 #41 DONE cov: 10993 ft: 17947 corp: 11/41b lim: 4 exec/s: 20 rss: 74Mb 00:08:26.442 ###### Recommended dictionary. ###### 00:08:26.442 "\037\000" # Uses: 5 00:08:26.442 "\000\000" # Uses: 0 00:08:26.442 ###### End of recommended dictionary. 
###### 00:08:26.442 Done 41 runs in 2 second(s) 00:08:26.442 [2024-07-15 16:19:11.877726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:26.701 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:26.701 16:19:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:26.701 [2024-07-15 16:19:12.185537] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:08:26.701 [2024-07-15 16:19:12.185632] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524448 ] 00:08:26.701 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.701 [2024-07-15 16:19:12.265962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.961 [2024-07-15 16:19:12.352782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.220 INFO: Running with entropic power schedule (0xFF, 100). 00:08:27.220 INFO: Seed: 1164058418 00:08:27.220 INFO: Loaded 1 modules (355086 inline 8-bit counters): 355086 [0x296db0c, 0x29c461a), 00:08:27.220 INFO: Loaded 1 PC tables (355086 PCs): 355086 [0x29c4620,0x2f2f700), 00:08:27.220 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:27.220 INFO: A corpus is not provided, starting from an empty corpus 00:08:27.220 #2 INITED exec/s: 0 rss: 66Mb 00:08:27.220 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:27.220 This may also happen if the target rejected all inputs we tried so far 00:08:27.220 [2024-07-15 16:19:12.604156] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:27.220 [2024-07-15 16:19:12.656592] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:27.220 [2024-07-15 16:19:12.656628] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:27.479 NEW_FUNC[1/661]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:27.479 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:27.479 #37 NEW cov: 10932 ft: 10861 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 5 InsertByte-InsertRepeatedBytes-ChangeByte-CMP-CopyPart- DE: "\026\000\000\000"- 00:08:27.738 [2024-07-15 16:19:13.127427] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:27.738 [2024-07-15 16:19:13.127475] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:27.738 #38 NEW cov: 10961 ft: 14596 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CMP- DE: "\347o|\005\000\000\000\000"- 00:08:27.738 [2024-07-15 16:19:13.308125] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:27.738 [2024-07-15 16:19:13.308157] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:27.996 NEW_FUNC[1/1]: 0x1a4b020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:27.996 #39 NEW cov: 10978 ft: 15973 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:08:27.996 [2024-07-15 16:19:13.500480] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:27.996 [2024-07-15 16:19:13.500512] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:28.254 #40 NEW cov: 10978 ft: 16284 corp: 5/33b lim: 8 exec/s: 40 rss: 74Mb L: 8/8 MS: 1 PersAutoDict- DE: "\347o|\005\000\000\000\000"- 00:08:28.254 [2024-07-15 16:19:13.675454] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:28.254 
[2024-07-15 16:19:13.675484] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:28.254 #41 NEW cov: 10978 ft: 16797 corp: 6/41b lim: 8 exec/s: 41 rss: 74Mb L: 8/8 MS: 1 PersAutoDict- DE: "\026\000\000\000"- 00:08:28.512 [2024-07-15 16:19:13.849065] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.512 #46 NEW cov: 10979 ft: 16940 corp: 7/49b lim: 8 exec/s: 46 rss: 74Mb L: 8/8 MS: 5 EraseBytes-EraseBytes-ChangeBinInt-InsertRepeatedBytes-CrossOver- 00:08:28.512 [2024-07-15 16:19:14.023771] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.770 #47 NEW cov: 10979 ft: 17102 corp: 8/57b lim: 8 exec/s: 47 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:28.770 [2024-07-15 16:19:14.210095] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.770 #48 NEW cov: 10979 ft: 17426 corp: 9/65b lim: 8 exec/s: 48 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:29.028 [2024-07-15 16:19:14.386560] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.028 #53 NEW cov: 10986 ft: 17472 corp: 10/73b lim: 8 exec/s: 53 rss: 75Mb L: 8/8 MS: 5 EraseBytes-ShuffleBytes-ChangeBit-ChangeByte-CopyPart- 00:08:29.028 [2024-07-15 16:19:14.564717] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:29.028 [2024-07-15 16:19:14.564749] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:29.288 #54 NEW cov: 10986 ft: 17835 corp: 11/81b lim: 8 exec/s: 27 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:29.289 #54 DONE cov: 10986 ft: 17835 corp: 11/81b lim: 8 exec/s: 27 rss: 75Mb 00:08:29.289 ###### Recommended dictionary. ###### 00:08:29.289 "\026\000\000\000" # Uses: 1 00:08:29.289 "\347o|\005\000\000\000\000" # Uses: 1 00:08:29.289 ###### End of recommended dictionary. 
###### 00:08:29.289 Done 54 runs in 2 second(s) 00:08:29.289 [2024-07-15 16:19:14.689730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:29.585 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:29.585 16:19:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:29.585 [2024-07-15 16:19:15.005477] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:08:29.585 [2024-07-15 16:19:15.005580] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524808 ] 00:08:29.585 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.585 [2024-07-15 16:19:15.084451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.860 [2024-07-15 16:19:15.172464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.860 INFO: Running with entropic power schedule (0xFF, 100). 00:08:29.860 INFO: Seed: 3988068724 00:08:29.860 INFO: Loaded 1 modules (355086 inline 8-bit counters): 355086 [0x296db0c, 0x29c461a), 00:08:29.860 INFO: Loaded 1 PC tables (355086 PCs): 355086 [0x29c4620,0x2f2f700), 00:08:29.860 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:29.860 INFO: A corpus is not provided, starting from an empty corpus 00:08:29.860 #2 INITED exec/s: 0 rss: 66Mb 00:08:29.860 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:29.860 This may also happen if the target rejected all inputs we tried so far 00:08:29.860 [2024-07-15 16:19:15.428056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:30.382 NEW_FUNC[1/660]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:30.382 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:30.382 #52 NEW cov: 10946 ft: 10732 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 InsertByte-CrossOver-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:08:30.641 #53 NEW cov: 10960 ft: 14852 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:30.899 NEW_FUNC[1/1]: 0x1a4b020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:30.899 #54 NEW cov: 10977 ft: 15937 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:30.899 #55 NEW cov: 10977 ft: 16664 corp: 5/129b lim: 32 exec/s: 55 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:30.899 [2024-07-15 16:19:16.472032] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xa00000000000000, 0xa00000100000000) fd=325 offset=0xa00000000000000 prot=0x3: Cannot allocate memory 00:08:30.899 [2024-07-15 16:19:16.472075] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xa00000000000000, 0xa00000100000000) offset=0xa00000000000000 flags=0x3: Cannot allocate memory 00:08:30.899 [2024-07-15 16:19:16.472087] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Cannot allocate memory 00:08:30.899 [2024-07-15 16:19:16.472104] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.158 NEW_FUNC[1/1]: 0x13e1ca0 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3094 00:08:31.158 #56 NEW cov: 10988 ft: 17138 corp: 6/161b lim: 32 exec/s: 56 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:31.415 #57 NEW cov: 10988 ft: 17527 corp: 7/193b lim: 32 exec/s: 57 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:31.415 [2024-07-15 16:19:16.826068] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 72057594037927936 > max 8796093022208 00:08:31.415 [2024-07-15 16:19:16.826098] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xa00000000000000, 0xb00000000000000) offset=0xa00000000000000 flags=0x3: No space left on device 00:08:31.415 [2024-07-15 16:19:16.826110] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:08:31.415 [2024-07-15 16:19:16.826127] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.415 #58 NEW cov: 10988 ft: 17798 corp: 8/225b lim: 32 exec/s: 58 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:31.674 [2024-07-15 16:19:16.994201] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xa00000000000100, 0xa00000100000100) fd=325 offset=0xa00000000000000 prot=0x3: Cannot allocate memory 00:08:31.674 [2024-07-15 16:19:16.994226] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xa00000000000100, 0xa00000100000100) offset=0xa00000000000000 flags=0x3: Cannot allocate memory 00:08:31.674 [2024-07-15 16:19:16.994241] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Cannot allocate memory 00:08:31.674 [2024-07-15 16:19:16.994258] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.674 #59 NEW cov: 10988 ft: 17897 corp: 9/257b lim: 32 exec/s: 59 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:08:31.674 [2024-07-15 16:19:17.165925] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xa00000000000100, 0xa00000100000100) fd=325 offset=0xa00000000000000 prot=0x3: Permission denied 00:08:31.674 [2024-07-15 16:19:17.165949] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xa00000000000100, 0xa00000100000100) offset=0xa00000000000000 flags=0x3: Permission denied 00:08:31.674 [2024-07-15 16:19:17.165961] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:08:31.674 [2024-07-15 16:19:17.165977] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.934 #60 NEW cov: 10995 ft: 17976 corp: 10/289b lim: 32 exec/s: 60 rss: 74Mb L: 32/32 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:08:31.934 [2024-07-15 16:19:17.336127] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 18446742978492891136 > max 8796093022208 00:08:31.934 [2024-07-15 16:19:17.336152] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xa00000000000100, 0x9ffff0100000100) offset=0xa00000000000023 flags=0x3: No space left on device 00:08:31.934 [2024-07-15 16:19:17.336164] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:08:31.934 [2024-07-15 16:19:17.336181] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.934 #61 NEW cov: 10995 ft: 17996 corp: 11/321b lim: 32 exec/s: 30 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\377\377\377#"- 00:08:31.934 #61 DONE cov: 10995 ft: 17996 corp: 11/321b lim: 32 exec/s: 30 rss: 74Mb 00:08:31.934 ###### Recommended dictionary. ###### 00:08:31.934 "\001\000\000\000\000\000\000\000" # Uses: 1 00:08:31.934 "\377\377\377#" # Uses: 0 00:08:31.934 ###### End of recommended dictionary. 
###### 00:08:31.934 Done 61 runs in 2 second(s) 00:08:31.934 [2024-07-15 16:19:17.453731] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:32.193 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:32.193 16:19:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:32.452 [2024-07-15 16:19:17.776558] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 
00:08:32.452 [2024-07-15 16:19:17.776639] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525178 ] 00:08:32.452 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.452 [2024-07-15 16:19:17.856347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.452 [2024-07-15 16:19:17.940997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.711 INFO: Running with entropic power schedule (0xFF, 100). 00:08:32.711 INFO: Seed: 2455094356 00:08:32.711 INFO: Loaded 1 modules (355086 inline 8-bit counters): 355086 [0x296db0c, 0x29c461a), 00:08:32.711 INFO: Loaded 1 PC tables (355086 PCs): 355086 [0x29c4620,0x2f2f700), 00:08:32.711 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:32.711 INFO: A corpus is not provided, starting from an empty corpus 00:08:32.711 #2 INITED exec/s: 0 rss: 66Mb 00:08:32.711 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:32.711 This may also happen if the target rejected all inputs we tried so far 00:08:32.711 [2024-07-15 16:19:18.188291] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:33.229 NEW_FUNC[1/660]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:33.229 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:33.229 #164 NEW cov: 10948 ft: 10899 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:33.488 #165 NEW cov: 10962 ft: 14466 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:08:33.488 NEW_FUNC[1/1]: 0x1a4b020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:33.488 #166 NEW cov: 10979 ft: 15745 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:33.747 #177 NEW cov: 10979 ft: 16219 corp: 5/129b lim: 32 exec/s: 177 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:08:34.007 #178 NEW cov: 10979 ft: 16651 corp: 6/161b lim: 32 exec/s: 178 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:08:34.007 #184 NEW cov: 10979 ft: 16730 corp: 7/193b lim: 32 exec/s: 184 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:08:34.284 #186 NEW cov: 10979 ft: 16920 corp: 8/225b lim: 32 exec/s: 186 rss: 75Mb L: 32/32 MS: 2 EraseBytes-CopyPart- 00:08:34.543 #189 NEW cov: 10979 ft: 17086 corp: 9/257b lim: 32 exec/s: 189 rss: 75Mb L: 32/32 MS: 3 EraseBytes-ChangeByte-InsertRepeatedBytes- 00:08:34.543 #190 NEW cov: 10986 ft: 17172 corp: 10/289b lim: 32 exec/s: 190 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:08:34.802 #191 NEW cov: 10986 ft: 17305 corp: 11/321b lim: 32 exec/s: 95 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:08:34.802 #191 DONE cov: 10986 ft: 17305 corp: 11/321b lim: 32 exec/s: 95 rss: 75Mb 00:08:34.802 Done 191 runs in 2 second(s) 00:08:34.802 [2024-07-15 16:19:20.230728] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- 
../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:35.062 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:35.062 16:19:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:35.062 [2024-07-15 16:19:20.557614] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:08:35.062 [2024-07-15 16:19:20.557712] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525538 ] 00:08:35.062 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.062 [2024-07-15 16:19:20.635811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.321 [2024-07-15 16:19:20.722658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.580 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:35.580 INFO: Seed: 945121085 00:08:35.580 INFO: Loaded 1 modules (355086 inline 8-bit counters): 355086 [0x296db0c, 0x29c461a), 00:08:35.580 INFO: Loaded 1 PC tables (355086 PCs): 355086 [0x29c4620,0x2f2f700), 00:08:35.580 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:35.580 INFO: A corpus is not provided, starting from an empty corpus 00:08:35.580 #2 INITED exec/s: 0 rss: 66Mb 00:08:35.580 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:35.580 This may also happen if the target rejected all inputs we tried so far 00:08:35.581 [2024-07-15 16:19:20.975191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:35.581 [2024-07-15 16:19:21.029609] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.581 [2024-07-15 16:19:21.029644] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:35.841 NEW_FUNC[1/661]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:35.841 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:35.841 #13 NEW cov: 10954 ft: 10926 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:08:36.099 [2024-07-15 16:19:21.495567] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.099 [2024-07-15 16:19:21.495628] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.099 #14 NEW cov: 10968 ft: 14139 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:36.099 [2024-07-15 16:19:21.665065] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.099 [2024-07-15 16:19:21.665098] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.358 NEW_FUNC[1/1]: 0x1a4b020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:36.358 #20 NEW cov: 10988 ft: 15838 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:36.358 [2024-07-15 16:19:21.835808] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.358 [2024-07-15 16:19:21.835839] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.616 #21 NEW cov: 10988 ft: 16679 corp: 5/53b lim: 13 exec/s: 21 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:08:36.616 [2024-07-15 16:19:22.022552] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.616 [2024-07-15 16:19:22.022582] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.616 #22 NEW cov: 10988 ft: 17172 corp: 6/66b lim: 13 exec/s: 22 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:36.616 [2024-07-15 16:19:22.192766] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.616 [2024-07-15 16:19:22.192798] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.875 #23 NEW cov: 10988 ft: 17389 corp: 7/79b lim: 13 exec/s: 23 rss: 75Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:36.875 [2024-07-15 16:19:22.359760] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: 
Invalid argument 00:08:36.875 [2024-07-15 16:19:22.359790] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.134 #34 NEW cov: 10988 ft: 17469 corp: 8/92b lim: 13 exec/s: 34 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:37.134 [2024-07-15 16:19:22.538719] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.134 [2024-07-15 16:19:22.538750] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.134 #40 NEW cov: 10988 ft: 17712 corp: 9/105b lim: 13 exec/s: 40 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:08:37.134 [2024-07-15 16:19:22.710034] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.134 [2024-07-15 16:19:22.710066] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.393 #46 NEW cov: 10995 ft: 17764 corp: 10/118b lim: 13 exec/s: 46 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:37.393 [2024-07-15 16:19:22.875414] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.393 [2024-07-15 16:19:22.875445] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.652 #56 NEW cov: 10995 ft: 18001 corp: 11/131b lim: 13 exec/s: 28 rss: 75Mb L: 13/13 MS: 5 EraseBytes-CopyPart-CrossOver-ChangeByte-CopyPart- 00:08:37.652 #56 DONE cov: 10995 ft: 18001 corp: 11/131b lim: 13 exec/s: 28 rss: 75Mb 00:08:37.652 Done 56 runs in 2 second(s) 00:08:37.652 [2024-07-15 16:19:22.994736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 
00:08:37.910 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:37.910 16:19:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:37.910 [2024-07-15 16:19:23.323181] Starting SPDK v24.09-pre git sha1 24034319f / DPDK 24.03.0 initialization... 00:08:37.910 [2024-07-15 16:19:23.323261] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525893 ] 00:08:37.910 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.910 [2024-07-15 16:19:23.403494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.170 [2024-07-15 16:19:23.492493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.170 INFO: Running with entropic power schedule (0xFF, 100). 00:08:38.170 INFO: Seed: 3704125011 00:08:38.170 INFO: Loaded 1 modules (355086 inline 8-bit counters): 355086 [0x296db0c, 0x29c461a), 00:08:38.170 INFO: Loaded 1 PC tables (355086 PCs): 355086 [0x29c4620,0x2f2f700), 00:08:38.170 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:38.170 INFO: A corpus is not provided, starting from an empty corpus 00:08:38.170 #2 INITED exec/s: 0 rss: 66Mb 00:08:38.170 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:38.170 This may also happen if the target rejected all inputs we tried so far 00:08:38.170 [2024-07-15 16:19:23.732382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:38.429 [2024-07-15 16:19:23.785602] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.429 [2024-07-15 16:19:23.785637] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.696 NEW_FUNC[1/661]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:38.696 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:38.696 #9 NEW cov: 10948 ft: 10902 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 2 CopyPart-CMP- DE: "\026=\000\000\000\000\000\000"- 00:08:38.696 [2024-07-15 16:19:24.268673] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.696 [2024-07-15 16:19:24.268722] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.954 #27 NEW cov: 10963 ft: 13615 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 3 CrossOver-ShuffleBytes-PersAutoDict- DE: "\026=\000\000\000\000\000\000"- 00:08:38.954 [2024-07-15 16:19:24.452704] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.954 [2024-07-15 16:19:24.452737] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.212 NEW_FUNC[1/1]: 0x1a4b020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:39.212 #28 NEW cov: 10980 ft: 15199 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:39.212 [2024-07-15 16:19:24.630338] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.212 [2024-07-15 16:19:24.630370] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.212 #29 NEW cov: 10980 ft: 16494 corp: 5/37b lim: 9 exec/s: 29 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:39.470 [2024-07-15 16:19:24.812559] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.470 [2024-07-15 16:19:24.812590] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.470 #30 NEW cov: 10980 ft: 16668 corp: 6/46b lim: 9 exec/s: 30 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:08:39.470 [2024-07-15 16:19:24.983160] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.470 [2024-07-15 16:19:24.983191] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.729 #33 NEW cov: 10980 ft: 17002 corp: 7/55b lim: 9 exec/s: 33 rss: 76Mb L: 9/9 MS: 3 InsertByte-CopyPart-CrossOver- 00:08:39.729 [2024-07-15 16:19:25.167397] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.729 [2024-07-15 16:19:25.167454] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.729 #34 NEW cov: 10980 ft: 17319 corp: 8/64b lim: 9 exec/s: 34 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:39.988 [2024-07-15 16:19:25.344336] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.988 [2024-07-15 16:19:25.344367] vfio_user.c: 144:vfio_user_read: 
*ERROR*: Command 8 return failure 00:08:39.988 #38 NEW cov: 10980 ft: 17604 corp: 9/73b lim: 9 exec/s: 38 rss: 76Mb L: 9/9 MS: 4 InsertRepeatedBytes-InsertRepeatedBytes-ChangeBinInt-CrossOver- 00:08:39.988 [2024-07-15 16:19:25.526238] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.988 [2024-07-15 16:19:25.526269] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.247 #39 NEW cov: 10987 ft: 17744 corp: 10/82b lim: 9 exec/s: 39 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:40.247 [2024-07-15 16:19:25.703792] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.247 [2024-07-15 16:19:25.703824] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.247 #45 NEW cov: 10987 ft: 17764 corp: 11/91b lim: 9 exec/s: 22 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:08:40.247 #45 DONE cov: 10987 ft: 17764 corp: 11/91b lim: 9 exec/s: 22 rss: 76Mb 00:08:40.247 ###### Recommended dictionary. ###### 00:08:40.247 "\026=\000\000\000\000\000\000" # Uses: 1 00:08:40.247 ###### End of recommended dictionary. ###### 00:08:40.247 Done 45 runs in 2 second(s) 00:08:40.507 [2024-07-15 16:19:25.829741] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:40.766 16:19:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:40.766 16:19:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:40.766 16:19:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.766 16:19:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:40.766 00:08:40.766 real 0m19.855s 00:08:40.766 user 0m27.329s 00:08:40.766 sys 0m2.010s 00:08:40.766 16:19:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.766 16:19:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:40.766 ************************************ 00:08:40.766 END TEST vfio_llvm_fuzz 00:08:40.766 ************************************ 00:08:40.766 16:19:26 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:08:40.766 16:19:26 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:08:40.766 00:08:40.766 real 1m25.002s 00:08:40.766 user 2m8.097s 00:08:40.766 sys 0m9.540s 00:08:40.766 16:19:26 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.766 16:19:26 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:40.766 ************************************ 00:08:40.766 END TEST llvm_fuzz 00:08:40.766 ************************************ 00:08:40.766 16:19:26 -- common/autotest_common.sh@1142 -- # return 0 00:08:40.766 16:19:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:08:40.766 16:19:26 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:08:40.766 16:19:26 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:08:40.766 16:19:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.766 16:19:26 -- common/autotest_common.sh@10 -- # set +x 00:08:40.766 16:19:26 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:08:40.766 16:19:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:08:40.766 16:19:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:08:40.766 16:19:26 -- common/autotest_common.sh@10 -- # set +x 00:08:46.055 INFO: APP EXITING 00:08:46.055 INFO: killing all VMs 00:08:46.055 INFO: killing vhost app 00:08:46.055 INFO: EXIT DONE 
00:08:48.627 Waiting for block devices as requested
00:08:48.627 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme
00:08:48.627 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:48.627 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:48.885 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:48.885 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:48.885 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:48.885 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:49.144 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:49.144 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:49.144 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:49.402 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:49.402 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:49.402 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:49.660 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:49.660 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:49.660 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:49.917 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:55.191 Cleaning
00:08:55.191 Removing: /dev/shm/spdk_tgt_trace.pid1497984
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1495669
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1496793
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1497984
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1498553
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1499320
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1499556
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1500339
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1500361
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1500677
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1500964
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1501299
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1501552
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1501782
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1501989
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1502185
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1502405
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1502993
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1505490
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1505702
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1505915
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1506088
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1506481
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1506655
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1507046
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1507220
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1507427
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1507586
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1507690
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1507826
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1508269
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1508470
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1508665
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1508792
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1509030
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1509136
00:08:55.191 Removing: /var/run/dpdk/spdk_pid1509203
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1509402
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1509597
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1509826
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1510078
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1510316
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1510541
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1510732
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1510931
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1511130
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1511323
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1511521
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1511723
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1511921
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1512114
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1512313
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1512517
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1512760
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1513018
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1513255
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1513462
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1513528
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1513843
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1514438
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1514811
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1515554
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1515923
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1516286
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1516645
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1517012
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1517373
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1517732
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1518031
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1518340
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1518644
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1519006
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1519358
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1519720
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1520073
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1520432
00:08:55.192 Removing: /var/run/dpdk/spdk_pid1520793
00:08:55.450 Removing: /var/run/dpdk/spdk_pid1521152
00:08:55.450 Removing: /var/run/dpdk/spdk_pid1521508
00:08:55.450 Removing: /var/run/dpdk/spdk_pid1521867
00:08:55.450 Removing: /var/run/dpdk/spdk_pid1522214
00:08:55.450 Removing: /var/run/dpdk/spdk_pid1522526
00:08:55.450 Removing: /var/run/dpdk/spdk_pid1522812
00:08:55.450 Removing: /var/run/dpdk/spdk_pid1523138
00:08:55.450 Removing: /var/run/dpdk/spdk_pid1523672
00:08:55.451 Removing: /var/run/dpdk/spdk_pid1524091
00:08:55.451 Removing: /var/run/dpdk/spdk_pid1524448
00:08:55.451 Removing: /var/run/dpdk/spdk_pid1524808
00:08:55.451 Removing: /var/run/dpdk/spdk_pid1525178
00:08:55.451 Removing: /var/run/dpdk/spdk_pid1525538
00:08:55.451 Removing: /var/run/dpdk/spdk_pid1525893
00:08:55.451 Clean
00:08:55.451 16:19:40 -- common/autotest_common.sh@1451 -- # return 0
00:08:55.451 16:19:40 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:08:55.451 16:19:40 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:55.451 16:19:40 -- common/autotest_common.sh@10 -- # set +x
00:08:55.451 16:19:40 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:08:55.451 16:19:40 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:55.451 16:19:40 -- common/autotest_common.sh@10 -- # set +x
00:08:55.451 16:19:41 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:55.451 16:19:41 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:55.451 16:19:41 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:55.451 16:19:41 -- spdk/autotest.sh@391 -- # hash lcov
00:08:55.451 16:19:41 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:08:55.710 16:19:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:08:55.710 16:19:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:08:55.710 16:19:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:55.710 16:19:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:55.710 16:19:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:55.710 16:19:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:55.710 16:19:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:55.710 16:19:41 -- paths/export.sh@5 -- $ export PATH
00:08:55.710 16:19:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:55.710 16:19:41 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:08:55.710 16:19:41 -- common/autobuild_common.sh@444 -- $ date +%s
00:08:55.710 16:19:41 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721053181.XXXXXX
00:08:55.710 16:19:41 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721053181.Ob9plP
00:08:55.710 16:19:41 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:08:55.710 16:19:41 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:08:55.710 16:19:41 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:08:55.710 16:19:41 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:08:55.710 16:19:41 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:08:55.710 16:19:41 -- common/autobuild_common.sh@460 -- $ get_config_params
00:08:55.710 16:19:41 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:08:55.710 16:19:41 -- common/autotest_common.sh@10 -- $ set +x
00:08:55.711 16:19:41 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:08:55.711 16:19:41 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:08:55.711 16:19:41 -- pm/common@17 -- $ local monitor
00:08:55.711 16:19:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:55.711 16:19:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:55.711 16:19:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:55.711 16:19:41 -- pm/common@21 -- $ date +%s
00:08:55.711 16:19:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:55.711 16:19:41 -- pm/common@21 -- $ date +%s
00:08:55.711 16:19:41 -- pm/common@25 -- $ sleep 1
00:08:55.711 16:19:41 -- pm/common@21 -- $ date +%s
00:08:55.711 16:19:41 -- pm/common@21 -- $ date +%s
00:08:55.711 16:19:41 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053181
00:08:55.711 16:19:41 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053181
00:08:55.711 16:19:41 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053181
00:08:55.711 16:19:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053181
00:08:55.711 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053181_collect-vmstat.pm.log
00:08:55.711 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053181_collect-cpu-load.pm.log
00:08:55.711 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053181_collect-cpu-temp.pm.log
00:08:55.711 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053181_collect-bmc-pm.bmc.pm.log
00:08:56.646 16:19:42 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:08:56.646 16:19:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72
00:08:56.646 16:19:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:56.646 16:19:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:08:56.646 16:19:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:08:56.646 16:19:42 -- spdk/autopackage.sh@19 -- $ timing_finish
00:08:56.646 16:19:42 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:08:56.646 16:19:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:08:56.646 16:19:42 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:56.646 16:19:42 -- spdk/autopackage.sh@20 -- $ exit 0
00:08:56.646 16:19:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:08:56.646 16:19:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:08:56.646 16:19:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:08:56.646 16:19:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:56.646 16:19:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:08:56.646 16:19:42 -- pm/common@44 -- $ pid=1531697
00:08:56.646 16:19:42 -- pm/common@50 -- $ kill -TERM 1531697
00:08:56.646 16:19:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:56.646 16:19:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:08:56.646 16:19:42 -- pm/common@44 -- $ pid=1531700
00:08:56.646 16:19:42 -- pm/common@50 -- $ kill -TERM 1531700
00:08:56.646 16:19:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:56.646 16:19:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:08:56.646 16:19:42 -- pm/common@44 -- $ pid=1531702
00:08:56.646 16:19:42 -- pm/common@50 -- $ kill -TERM 1531702
00:08:56.646 16:19:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:56.646 16:19:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:08:56.646 16:19:42 -- pm/common@44 -- $ pid=1531738
00:08:56.646 16:19:42 -- pm/common@50 -- $ sudo -E kill -TERM 1531738
00:08:56.646 + [[ -n 1391131 ]]
00:08:56.646 + sudo kill 1391131
00:08:56.914 [Pipeline] }
00:08:56.928 [Pipeline] // stage
00:08:56.933 [Pipeline] }
00:08:56.949 [Pipeline] // timeout
00:08:56.955 [Pipeline] }
00:08:56.974 [Pipeline] // catchError
00:08:56.981 [Pipeline] }
00:08:56.997 [Pipeline] // wrap
00:08:57.004 [Pipeline] }
00:08:57.021 [Pipeline] // catchError
00:08:57.031 [Pipeline] stage
00:08:57.033 [Pipeline] { (Epilogue)
00:08:57.048 [Pipeline] catchError
00:08:57.050 [Pipeline] {
00:08:57.066 [Pipeline] echo
00:08:57.068 Cleanup processes
00:08:57.074 [Pipeline] sh
00:08:57.358 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:57.358 1531880 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:08:57.358 1532656 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:57.373 [Pipeline] sh
00:08:57.652 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:57.652 ++ grep -v 'sudo pgrep'
00:08:57.652 ++ awk '{print $1}'
00:08:57.652 + sudo kill -9 1531880
00:08:57.664 [Pipeline] sh
00:08:57.947 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:08:58.893 [Pipeline] sh
00:08:59.176 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:08:59.176 Artifacts sizes are good
00:08:59.191 [Pipeline] archiveArtifacts
00:08:59.198 Archiving artifacts
00:08:59.306 [Pipeline] sh
00:08:59.591 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:08:59.606 [Pipeline] cleanWs
00:08:59.616 [WS-CLEANUP] Deleting project workspace...
00:08:59.616 [WS-CLEANUP] Deferred wipeout is used...
00:08:59.622 [WS-CLEANUP] done
00:08:59.624 [Pipeline] }
00:08:59.649 [Pipeline] // catchError
00:08:59.662 [Pipeline] sh
00:08:59.958 + logger -p user.info -t JENKINS-CI
00:09:00.024 [Pipeline] }
00:09:00.039 [Pipeline] // stage
00:09:00.045 [Pipeline] }
00:09:00.061 [Pipeline] // node
00:09:00.067 [Pipeline] End of Pipeline
00:09:00.103 Finished: SUCCESS